Crosspost: On the existential risks of artificial intelligence
“My point here is not to demonstrate that such events are impossible. On the contrary, my point is that autonomous human-made entities already exist, and cause the exact same risks that AI alarmists are talking about, except they are real. In this context, evil AI fantasies are an anthropomorphic distraction.” — Romain Brette
Let me quickly dismiss some misconceptions. Does ChatGPT understand language? Of course not. Large language models are (essentially) algorithms tuned to predict the next word. But here we don’t mean “word” in the human sense. In the human sense, a word is a symbol that means something. In the computer sense, a word is a symbol to which we humans attribute meaning. When ChatGPT talks about bananas, it has no idea what a banana tastes like (well, it has no idea of anything). It has never seen or tasted a banana (well, it has never seen or tasted anything). “Banana” is just a node in a big graph of other nodes, totally disconnected from the outside world, and in particular from whatever “banana” might actually refer to. This is known in cognitive science as the “symbol grounding problem”, and it is a difficult problem that LLMs do not solve. So, maybe LLMs “understand” language, but only if you are willing to define “understand” in such a way that knowing what words mean is not required.
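
To make “predict the next word” concrete, here is a minimal sketch in Python: a toy bigram model built from a few sentences. This is not how LLMs are actually implemented (they are neural networks over subword tokens, not count tables), and the corpus, the `follows` table, and `predict_next` are illustrative inventions; but the training objective is the same in spirit: pick a likely next token given the previous ones.

```python
from collections import Counter, defaultdict
import random

# Tiny illustrative corpus; a real model is trained on trillions of tokens.
corpus = "the banana is yellow . the banana is sweet . the sky is blue .".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to observed co-occurrence counts."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "banana" here is just a key in a table of counts: a node in a graph of
# other nodes, with no connection to taste, color, or the fruit itself.
print(predict_next("banana"))  # e.g. "is"
```

Nothing in this model links “banana” to anything outside the text; the word is exhausted by its statistical relations to other words, which is exactly the grounding gap described above.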
