AI AIn’t Human

Author: C. Warren Axelrod, Ph.D., CISM, CISSP
Date Published: 23 May 2024
Read Time: 2 minutes

Nowadays, there are many articles, podcasts, blogs and comments about how AI will exceed the intelligence of humans, for good or evil. This is the so-called "AI singularity," the point at which AI surpasses the human brain in capability. But these commentators are overlooking several important arguments.

One is that AI does not actually emulate the human mind. How can it, when we don't understand the full functionality of the brain? It should also be noted that AI is created by humans, who have their own misconceptions, biases and limitations.

In an earlier article, I alluded to research indicating that the cerebellum may have cognitive functions. Recently published articles describe previously unrecognized functionality of the cerebellum and the brain stem, and a new type of brain cell has also been discovered. Clearly, there is much more to learn about the human brain and mind.

So, my assertion is this: We cannot recreate the entire brain with computer technology when so much about the workings of this mysterious organ remains unknown. A particular concern of mine is that we are not duplicating the guardrails, both logical and physical, that operate within our brains. Further, many of the brain's thinking functions extend far beyond the cerebral cortex, and with AI we are attempting to duplicate only a small part of the cerebral cortex's functionality. Perhaps our concerns about AI going rogue stem from this partial emulation of brain functions; it means we cannot predict AI behavior by comparing it to that of the human mind, since the two are not equivalent.

Another issue is that AI systems lack many aspects of humanity, such as sympathy, empathy, suffering, fear of injury and death, and humor. Yes, they may "appear" to demonstrate such emotions, but deep down they don't actually feel them. This leads to the concern that we could be creating a population of psychopathic monsters. Part of that concern comes from not knowing how AI systems will adapt their behavior to unanticipated circumstances. After all, AI systems don't really "care" about us, and their creators may be interested only in creating cool technology, making money and/or accumulating power for themselves, no matter the cost to others.

Yet another issue, which many seem not to recognize, is that AI systems reflect the biases, prejudices and worldviews of their creators. AI systems are created to make money, whereas the primary motivation of nature's creatures is to survive long enough to reproduce. Why would AI systems want to reproduce to ensure the survival of their "species"? The answer is that they would not, but their creators might.

Editor's note: For more insights on this topic, read C. Warren Axelrod's recent ISACA Journal article, "Reducing Human and AI Risk in Autonomous Systems."