Understanding the World: Wicked Problems, Righteous Solutions in the Age of AI and Software Complexity
Exploring the Challenges of Software Development and Maintenance, from Toggle Switches to AI Coding and Beyond
Note: ChatGPT was used extensively as research assistant and ghostwriter. I draw on my decades of experience in information systems to shape the essay.
Author's Preface
Early Beginnings with FORTRAN and Primitive Computers
I originally started my career as an electronics technologist, and in the course of that, I studied some very basic programming in the FORTRAN II language, using a very primitive computer which had no operating system. You fed in the compiler on punched cards, then fed in your source deck on punched cards, and the output came out on a line printer. As a matter of fact, as a student I was not allowed to feed in the source deck myself; senior students did that. Anyway, that was a brief episode.
Rediscovery of Programming
I didn't get back to it for ten years, until I found I was a half-credit short for my honors psychology degree, and I took a course in FORTRAN programming. I liked it, and I was fairly good at it. I became a teaching assistant, and for a while I did statistical data crunching for clients, mostly professors, I think. Eventually, after a stint in grad school, I decided programming was a lot more fun, so I went off to become a computer programmer, and I worked in information technology and systems for decades, in various roles. So I saw the evolution of the technology firsthand.
Challenges in Keeping Up with the Rate of Technological Change
At some point, I threw up my hands and said I couldn't cope with the rate of change anymore. I moved first into more theoretical areas, and eventually left the field entirely and retired. Since then, I've done very little programming. A few years ago I tried Java and wrote a little program, which I believe I still have. I got frustrated because the Java model did not work the way my previous coding experience led me to expect, and I got stuck. I asked an old acquaintance for help; he ignored me, so I left it at that.
Occasional Reflections on Computing Today
So from time to time, I look to see what's going on in the computing world. I read things now and then, but mostly I watch videos and great debates about this, that, and the other thing, all full of opinion, all claiming to be fact. But this morning, I'm going to reflect on computing as I understand it today.
Fears for the Future: Should I be buoyant or despondent?
With the advent of LLM AI, currently impressive in its capabilities and undoubtedly going to improve at an unprecedented pace, it will develop capabilities beyond our ability to predict—naysayers to the contrary. I consider them to be full of bovine excrement. However, I think that the social disruption due to LLM AI and its possible successors will be immense and will make the development of the PC and the advent of the internet seem like minor chapters in the story of life. However, there is a malevolent possibility: the Skynet possibility, presented in any number of books and videos. I pray that such disastrous possibilities will remain in the realm of fiction, but I am not at all confident that our idiot/savant species will not lead us into a dystopian future.
Introduction
Software development has always been riddled with complexities, particularly in terms of communication, requirements gathering, and maintenance. The "wicked problems" of software development—difficulties that defy simple solutions—have persisted despite advancements in methodologies and technologies. From the earliest days of programming with toggle switches, to the current capabilities of AI-driven systems, software development has evolved dramatically, but many of the fundamental challenges remain. This essay explores the historical progression of computing technology, the persistent importance of maintenance, and the increasing role of AI in both coding and gathering requirements.
Wicked Problems in Software Development
The concept of "wicked problems" was first articulated by Horst Rittel and Melvin Webber in 1973 to describe problems that are inherently complex, ill-defined, and resistant to definitive solutions (Rittel & Webber, 1973). A wicked problem does not have a clear set of requirements, and its definition often evolves as stakeholders gain a better understanding of what they need. In the context of software development, wicked problems arise because user needs are difficult to pin down, and often change over the course of a project.
What makes wicked problems particularly challenging is that they do not have a definitive solution—only better or worse outcomes. In software development, this often manifests as the constant refinement and adjustment of requirements, goals, and solutions throughout the life cycle of a project. Solutions that work for one user may not work for another, and changes made in one part of the system can have unintended consequences elsewhere.
The Persistent Complexity of Software Development
Difficulty in Eliciting Requirements
Eliciting clear requirements from users has always been one of the most difficult aspects of software development. Often, users do not fully understand their own needs, making it challenging for them to articulate clear and actionable requirements. This disconnect between users and developers is a frequent cause of miscommunication, resulting in project delays, scope changes, and budget overruns. Even methodologies like Agile, which emphasize iterative feedback, have not entirely solved the problem, as users may only fully understand what they want after interacting with a working prototype (Sharp, Finkelstein, & Galal, 1999).
The fact that requirements often evolve during a project compounds the issue. As users see the system taking shape, their understanding of their needs may shift, requiring developers to pivot mid-project. This evolving nature of requirements is one of the key reasons software development is classified as a wicked problem (Rittel & Webber, 1973).
The Importance of Maintenance Over Development
Another often-overlooked reality of software development is the disproportionate focus on building new systems, when maintenance—fixing bugs, adding features, and ensuring ongoing functionality—consumes most of the resources. The bulk of a software system's lifecycle is spent in maintenance mode, where continuous adjustments are needed to keep the system operational and up to date with new technology (Pigoski, 1996). Ignoring maintenance can lead to system obsolescence, security vulnerabilities, and inefficiencies that undermine the value of the initial development.
Despite this reality, many development teams focus primarily on building new software rather than maintaining what already exists. A focus on maintenance, however, is crucial to ensuring long-term success and sustainability in any software system (Lehman, 1996).
The Historical Development of Computing Technologies
Toggle Switches, Machine Code, and Punch Cards
The history of computing stretches back to a time when interacting with a machine was far more arduous than today. On the earliest systems, programmers entered binary machine instructions directly, painstakingly flipping front-panel toggle switches to communicate with the hardware (DeGrace & Stahl, 1990). Eventually, symbolic assembly languages and then compilers made programming marginally easier, but it remained incredibly challenging.
With the advent of punch cards, programming became somewhat more manageable. The compiler and the programmer's source code were each punched onto decks of cards and fed into the computer to be processed. Output appeared on line printers, and an error meant manually searching through the punch cards to identify the issue.
From Dumb Terminals to Smart Terminals
As technology advanced, dumb terminals—simple input/output devices connected to a central mainframe—became standard. These terminals had no processing power and were merely conduits for users to interact with the computer. Later, smart terminals emerged, with local processing capabilities that allowed for more dynamic user interactions (DeGrace & Stahl, 1990).
This progression from toggle switches to punch cards, and from dumb terminals to smart terminals, illustrates the increasing sophistication of computing technologies. Each step made it easier for users to interact with systems, culminating in the highly interactive environments we rely on today.
The Escalating Complexity of Technology
Technological Progress and Increased Capabilities
With the rise of personal computers, local area networks (LANs), and the internet, the complexity of software systems increased exponentially. Early systems were limited by their basic input/output capabilities, but more powerful hardware and software allowed systems to take on far more intricate tasks (Sharp et al., 1999).

With the advent of cloud computing, storing and processing vast amounts of data remotely became standard practice. Cloud infrastructure allowed distributed systems to handle large-scale processing without the need for local hardware. But this capability came at the cost of complexity: the more powerful the systems became, the more intricate their design and maintenance requirements. Developers now had to manage multi-layered architectures, distributed databases, and cloud services, each with its own unique challenges.
Greater Complexity, Greater Challenges
Today’s software systems are built on these complex, multi-layered architectures, which require a deep understanding of both the underlying hardware and the higher-level software frameworks. Developers must manage an ever-growing number of tools, languages, and systems, all of which must interact seamlessly to ensure that applications run smoothly and efficiently.
Maintaining these systems has also become more challenging. As systems grow more interconnected, small changes in one part of the system can have cascading effects on other components, making the maintenance process much more delicate. Developers must balance the need to innovate and improve systems with the risk of introducing errors or vulnerabilities. This balance is particularly important in critical industries like finance, healthcare, and security, where system failures can have severe consequences (Lehman, 1996).
The Role of AI in Software Development
AI-Assisted Coding: Emergence of Large Language Models
AI has become a transformative force in software development, particularly through large language models such as OpenAI’s Codex. These models can generate code from natural language prompts, reducing the workload on developers and speeding up development. As AI continues to evolve, it is becoming capable of handling increasingly complex coding tasks, freeing developers to focus on higher-level design and problem-solving (OpenAI, 2021).
AI Coding Without Human Intervention
In the near future, AI will likely be capable of performing entire coding tasks without human intervention. This goes beyond aiding developers with routine tasks: AI will be able to generate, test, and debug code autonomously. Human developers may transition from writing code themselves to overseeing and managing the AI systems that do the coding for them. As these systems become more sophisticated, they could outperform humans in both speed and accuracy, further reducing the need for direct human involvement in the coding process.
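The generate-test-debug loop described above can be sketched in a few lines. This is only an illustration: the function fake_model below is a stub standing in for a real LLM API call, and every name in it is hypothetical, not part of any real library.

```python
# Minimal sketch of an autonomous generate-test-repair loop.
# "fake_model" is a stub standing in for a real LLM API call;
# all names are illustrative, not a real library.

def fake_model(prompt, feedback=None):
    """Stand-in for an LLM: returns candidate source code for the prompt."""
    if feedback is None:
        # First attempt deliberately contains an off-by-one bug,
        # so the loop has something to catch and repair.
        return ("def factorial(n):\n"
                "    result = 1\n"
                "    for i in range(1, n):\n"
                "        result *= i\n"
                "    return result\n")
    # "Repaired" attempt, as if the model had seen the failing test.
    return ("def factorial(n):\n"
            "    result = 1\n"
            "    for i in range(1, n + 1):\n"
            "        result *= i\n"
            "    return result\n")

def run_tests(factorial):
    assert factorial(0) == 1, "factorial(0) should be 1"
    assert factorial(5) == 120, "factorial(5) should be 120"

def generate_and_verify(prompt, max_attempts=3):
    feedback = None
    for _ in range(max_attempts):
        source = fake_model(prompt, feedback)
        namespace = {}
        exec(source, namespace)              # compile and load the candidate
        try:
            run_tests(namespace["factorial"])
            return source                    # all tests passed
        except AssertionError as err:
            feedback = str(err)              # feed the failure back to the model
    raise RuntimeError("no passing candidate within attempt budget")

working_source = generate_and_verify("Write a factorial(n) function.")
```

Here the first candidate fails the test suite, the failure message is fed back as context, and the second candidate passes. A production system would replace the stub with a hosted model and a far richer test harness, but the control loop is the same.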
The Future of Requirements Gathering and AI
While AI has shown tremendous potential in automating coding tasks, one area where it has not yet fully taken over is requirements gathering. Currently, requirements are still largely gathered through human interactions, where developers and business analysts work with users to understand their needs. However, AI can already assist in writing and refining requirements based on provided inputs, and it is likely that in the near future, interactive, voice-enabled AI will be able to gather requirements directly from users (Sharp et al., 1999).
By engaging in real-time dialogues and generating rapid prototypes, AI could help users clarify their needs, reducing the uncertainty and ambiguity that often plague software projects. This capability could streamline the development process, making it faster and more accurate by ensuring that requirements are well-defined from the start.
Self-Learning AI and the Dangers of Autonomous Systems
AI systems are becoming increasingly capable of self-learning and self-improvement. While these advancements have the potential to enhance efficiency, they also introduce significant risks, particularly when AI systems are given control over physical systems through sensors and actuators (OpenAI, 2021). In the context of military and security technologies, AI is being integrated into robots and drones, allowing these systems to operate with minimal human oversight.
This is a regrettable consequence of unthinking humans letting technology run amok, playing games and asking, "Wouldn't it be cool if?" Such sentiments, seemingly harmless at the development stage, have already materialized in weaponized drones and autonomous robot dogs capable of performing military operations. The dangerous leap from automation to autonomy has arrived, and these technologies can now make independent decisions with severe and unforeseen consequences.
The fact that AI does not need to be conscious to cause harm is critical. An autonomous system equipped with sensors and actuators could carry out actions with mechanical efficiency, potentially leading to unintended consequences or even catastrophic outcomes. As AI systems gain more control over critical infrastructure and military operations, the risks associated with autonomous decision-making increase exponentially.
The current integration of AI into military and security systems raises urgent ethical questions about the autonomy we are granting these technologies. Without proper oversight and ethical frameworks, AI could become a significant threat not only to individual users but to society as a whole.
The Golem, Robots, and AI’s Historical Precedents
The concept of humanity creating beings capable of performing tasks autonomously is far from new. Long before artificial intelligence became a reality, legends like the Golem of Jewish folklore warned of the dangers of creating a being without fully understanding or controlling it. The Golem, made of clay and brought to life to protect its community, was often depicted as slipping out of control and becoming a threat to its creators. This theme of unintended consequences resonates deeply with modern concerns about AI.
The early 20th century also saw the emergence of the term robot in Karel Čapek's play R.U.R. (Rossum’s Universal Robots), in which robots—initially designed to serve humans—eventually rebelled and led to the downfall of mankind. While fictional, the work highlighted the same fears that remain prevalent in discussions about AI: the risk that technology, once given autonomy, could evolve beyond human control and operate on terms detrimental to humanity.
These historical precedents underscore humanity’s long-standing fascination—and trepidation—towards creating autonomous systems. As AI systems develop further and are integrated into critical infrastructures, particularly in the military and security fields, these concerns have become more tangible. With AI already capable of controlling drones, robots, and other autonomous systems, we are witnessing the real-world manifestation of these age-old warnings.
The question remains whether we, like the creators of the Golem or the inventors of robots in R.U.R., will lose control of the technologies we have created or if we will be able to effectively manage and guide their development responsibly.
Conclusion
Balancing Capability and Complexity
The history of computing—from toggle switches to AI-driven systems—illustrates the incredible advancements made in the field. However, these advancements come with increased complexity, both in terms of development and maintenance. As systems become more powerful and capable, they also become more difficult to manage, requiring developers to balance innovation with the need for stability and security.
The Uncertain Future of Software Development
AI is poised to further revolutionize software development, potentially automating the entire coding process and even assisting in requirements gathering. However, this progress brings with it significant risks, especially as AI systems gain more autonomy. The dangers of self-learning AI systems, particularly in military and security contexts, cannot be ignored.
Final Thoughts on the Certainty of AI's Impact
There is no longer any need for cautious speculation. The trajectory of AI development makes it certain that these systems will soon possess capabilities far beyond what we can currently imagine. The pace of technological advancement, particularly in areas like autonomous decision-making and self-learning, virtually guarantees that AI will pose a severe threat to humanity. This is not mere alarmism—it's a logical assessment of where unchecked AI development is heading.
As AI becomes more embedded in military and security systems, capable of making decisions without human oversight, the risks escalate dramatically. The danger is not in whether AI might become conscious, but rather in the simple fact that it can operate with ruthless efficiency, detached from ethical considerations or human values. This is a threat that must be taken seriously, as it is no longer a matter of "if" but "when."
The real question is whether we, as a species, are prepared to manage the Pandora’s box that we have opened. Without immediate and significant regulation, oversight, and ethical guidelines, AI’s future capabilities will certainly bring unforeseen—and potentially catastrophic—consequences.
References
Beck, K., et al. (2001). Manifesto for Agile Software Development. Retrieved from https://agilemanifesto.org/
Commentary: A foundational document for Agile development, this manifesto has been critical in shaping modern software development practices, emphasizing collaboration, flexibility, and iterative progress.
DeGrace, P., & Stahl, L. H. (1990). Wicked Problems, Righteous Solutions: A Catalogue of Modern Software Engineering Paradigms. Englewood Cliffs, NJ: Prentice Hall. Available from https://www.amazon.ca/dp/013590126X
Commentary: This book explores the concept of wicked problems within software engineering and the need for innovative solutions. It was highly influential in discussions about the complexity of software projects.
Lehman, M. M. (1996). Laws of Software Evolution Revisited. In Proceedings of the 5th European Workshop on Software Process Technology (EWSPT '96). Berlin: Springer. Retrieved from https://www.rose-hulman.edu/Class/csse/csse490/cs490-const-and-evol/LawsOfSoftwareEvolutionRevisited.pdf
Commentary: Lehman’s work is crucial for understanding the dynamics of software evolution, highlighting the challenges of maintenance and ongoing adaptation to new technologies.
OpenAI. (2021). Codex: AI-Assisted Coding. Retrieved from https://openai.com/index/openai-codex/
Commentary: Codex by OpenAI marks a significant leap forward in AI’s role in coding, illustrating the potential for large language models to take on complex coding tasks and improve software development efficiency.
Pigoski, T. M. (1996). Practical Software Maintenance: Best Practices for Managing Your Software Investment. New York, NY: Wiley. Available from https://www.amazon.ca/Practical-Software-Maintenance-Investment-1996-11-01/dp/B01JXYUD8M
Commentary: Pigoski’s guide provides detailed strategies for managing software systems over the long term, stressing the importance of effective maintenance in the software lifecycle.
Sharp, H., Finkelstein, A., & Galal, G. (1999). Stakeholder Identification in the Requirements Engineering Process. In Proceedings of the 10th International Workshop on Database & Expert Systems Applications (DEXA) (pp. 387-391). IEEE Computer Society Press.
Commentary: This paper examines the importance of identifying stakeholders in the requirements engineering process and the challenges involved in ensuring all relevant perspectives are considered.
Sommerville, I. (2016). Software Engineering (10th ed.). Boston, MA: Pearson. Available from https://www.amazon.ca/dp/0133943038
Commentary: A comprehensive and up-to-date textbook on software engineering, Sommerville’s work is a must-read for understanding modern development practices, methodologies, and challenges.
van Lamsweerde, A. (2009). Requirements Engineering: From System Goals to UML Models to Software Specifications. Chichester, UK: Wiley. Available from https://www.amazon.ca/Requirements-Engineering-System-Software-Specifications/dp/0470012706
Commentary: This book covers the requirements engineering process, focusing on how to accurately capture and define system needs to guide successful development efforts.
Woods, D. D., Dekker, S., Cook, R., Johannesen, L., & Sarter, N. (2017). Behind Human Error (2nd ed.). Boca Raton, FL: CRC Press. Available from https://www.taylorfrancis.com/books/mono/10.1201/9781315568935/behind-human-error-david-woods-sidney-dekker-richard-cook-leila-johannesen-nadine-sarter
Commentary: This work explores the intersection of human cognition, technology, and system complexity, offering valuable insights into how human error contributes to software failures.