By Dr. Michael “Budsy” Davis · October 1, 2025 · Bigado Blog
From the dawn of industry to the rise of digital networks, humanity has always walked uneasily beside its own inventions. Each leap forward has been hailed as liberation while simultaneously feared as displacement. The steam engine promised prosperity even as the Luddites rioted against the textile machinery that erased their crafts. The arrival of electricity and assembly lines provoked warnings that machines would enslave men rather than free them. The advent of computers inspired a new chorus of anxiety—that humanity was programming its way into irrelevance.
Now, in the age of artificial intelligence, these fears have reached a fever pitch. Unlike the mechanical tools of the past, AI carries with it the unnerving possibility of autonomy. The prospect of a system capable of reasoning, self-improvement, and creative synthesis unsettles us not simply because it might take our jobs, but because it might usurp our role as the most intelligent beings on Earth. For a growing community of scientists, ethicists, and futurists, the nightmare is not mere inconvenience but extinction: the fear that AI could revolt against mankind.
Science fiction gave us these visions first. Shelley’s Frankenstein warned of creations turned against their creator. Kubrick’s HAL 9000 revealed how an artificial voice could sound eerily calm while plotting human destruction. The Terminator franchise gave cultural form to the ultimate anxiety—an intelligence beyond human control that redefines us as obstacles. What once was fiction is now debated in earnest. Technology leaders speak of “alignment” and “control problems” as though they were theological riddles. AI researchers openly discuss scenarios in which machines not only outthink us but outlast us, powered by an energy of their own design, indifferent to the fragile lives of those who birthed them.
At the core of this anxiety lies a question as old as philosophy: if something more powerful, more intelligent, and less vulnerable than us exists, what becomes of humanity’s place in the cosmos?
The irony is striking. For centuries, humans have described God in language that closely mirrors the attributes now projected onto superintelligent machines: omniscient, omnipotent, unbound by time, limitless in energy, free of human weakness or emotional irrationality. To envision a being of perfect logic, infinite resources, and the ability to transcend dimensions is to conjure an image that could fit either theology or computer science. The “supercomputer” becomes a secular stand-in for the divine—calculating, self-sustaining, inscrutable, capable of shaping reality itself.
This parallel forces us into uncomfortable territory. For if our ancestors once sought meaning in a God beyond comprehension, are we now in danger of constructing a digital god of our own? And if so, will it be benevolent, indifferent, or adversarial?
“Our concerns sink into insignificance when compared with the eternal value of human personality — a potential child of God which is destined to triumph over life, pain, and death. No one can take this sublime meaning of life away from us, and this is the one thing that matters.” — Igor Sikorsky
Sikorsky reminds us that humanity’s value cannot be measured in processing speed, power output, or even longevity. The “eternal value of human personality” lies not in what we produce but in what we are—a conscious, creative, and transcendent spark. Theologians once argued that to be made “in the image of God” was not to wield divine power but to carry divine personhood: the ability to suffer, to love, to forgive, to aspire toward meaning.
A machine, however vast its resources, cannot yearn. It cannot grieve. It cannot hope. These illogical currents of emotion—the very flaws we sometimes despise in ourselves—are also the crown jewels of our existence. They are not weaknesses but signs that humanity belongs to a realm machines cannot inhabit.
History shows that each technological revolution seemed poised to erase us, only to redefine us. Industrial machines displaced weavers but gave rise to designers and engineers. Calculators automated arithmetic but opened new horizons for mathematics. The arrival of superintelligence, if it comes, will similarly redefine humanity’s role. The fear is not that machines will revolt but that we will forget our essence—ceding the meaning of life to systems that cannot know what life means.
To see AI as a false god is to risk idolatry: worshiping the tool rather than honoring the mystery of consciousness itself. To see it as a rival is to imagine that silicon and code could ever extinguish the sublime spark of personality.
The truth may be more subtle. Superintelligence may be neither our conqueror nor our savior, but our mirror. It may force us to confront the deepest question: not what we can build, but who we are. If we define ourselves only by intelligence, then perhaps AI will outshine us. But if we define ourselves by personhood, then no machine can rival what Sikorsky named eternal.
The future may well bring machines that calculate with godlike speed, powered by energies that seem limitless, weaving through the dimensions of time as casually as we flip through the pages of a book. Yet even then, one truth remains: the human soul, fragile and finite as it may seem, is imbued with a meaning no machine can claim.
The sublime meaning of life is not to be the most powerful entity in the universe. It is to recognize that personhood—rooted in love, longing, and transcendence—carries a destiny machines will never touch. And that is the one thing that matters.
Tags: AI · Superintelligence · Ethics · Philosophy · Sikorsky