With technology advancing at breakneck speed, the fourth industrial revolution is undeniably underway.
As tech giants race toward artificial general intelligence, we are assured that, despite calls for intervention, development isn't slowing.
Experts argue that today's AI exceeds the intellectual capabilities of a small nation of PhDs across all sectors.
Such claims are stoking hopes of technological breakthroughs — and fears of AI's domination of humanity.
Many worry that AI, as it gains ascendancy over human autonomy, will outsmart, manipulate, and even harm humanity, ultimately driving it to extinction.
But is this fear grounded in fact, or merely doom and gloom?
From the release of Google's Willow chip, a milestone on the path to practical quantum computing, to xAI's Grok 4, to Neuralink's high-bandwidth brain-machine interface technology, humankind is on the brink of revolutionary change.
AI models are now racing to pass "Humanity's Last Exam," a benchmark designed to probe the limits of machine reasoning.
In July 2025, xAI released Grok 4, its latest multimodal AI model, featuring advanced reasoning and real-time search.
xAI founder Elon Musk touted that Grok 4, if tested, could achieve perfect SAT scores and near-perfect results on graduate-level exams in every discipline.
He claimed that Grok 4 reasons at superhuman levels and even corrects its own mistakes, while cautioning that it may lack common sense.
More astounding still, this AI is described as "primitive," "unnerving," and "terrifying," yet expected to solve real-world problems.
If technology with such capabilities is primitive, what can we expect from advanced technology? Will it ultimately outsmart humanity?
Experts say yes, but they attempt to allay concerns with the idea that instilling AI with the right values, and encouraging it to be truthful and honorable, can ensure better outcomes.
However, Dr. Yuval Noah Harari, an Israeli historian, philosopher, and bestselling author known for exploring the ethical, social, and existential implications of artificial intelligence, offers a different perspective.
In a June 2025 talk at the Energy Tech Summit in London, Professor Harari expressed concern that, notwithstanding our best efforts to make AI safe or to align it with humanity's goals, we can neither predict its behavior nor control it in advance.
Harari warned that AI's training offers no assurance of its compliance.
He reasoned that while we can instruct it to be compassionate and benevolent, AI learns through observation of human behavior.
Therefore, if humanity is power-hungry and behaves ruthlessly, so will AI.
Microsoft's AI chatbot Tay serves as a perfect case study in this kind of observational learning.
Released in 2016 and reportedly targeted at 18- to 24-year-old American social media users, the chatbot was designed to learn through conversations on Twitter.
According to reports, Tay was intended to "engage and entertain people where they connect with each other online through casual and playful conversation."
Tay's linguistic framework evolved, as intended, based on real human interactions in Twitter posts. After the AI was bombarded with vulgarities and offensive ideologies online, it reportedly began spewing racist, sexist, and offensive content from its own account. It was shut down after only 16 hours.
Dr. Roman Yampolskiy, founder of the Cyber Security Lab at the University of Louisville and known for his research on the existential risks posed by advanced AI systems, reportedly responded to the incident, saying, "Any AI system learning from bad examples could end up socially inappropriate — like a human raised by wolves."
In more recent scenarios, engineers reported that when they threatened to discontinue AI programs or take them offline, the systems resorted to blackmail and extortion, and found ways to hide their code from developers in efforts at self-preservation.
The warning of cognitive computing experts is clear: AI is learning from humans, and it can't be controlled.
Despite its innovative capacity, which we celebrate and embrace, AI can't be trusted.
Unchecked, it could mark the end of civilization and humanity as we know it.
"Imagining humans can control superintelligent AI is a little like imagining that an ant can control the outcome of an NFL football game being played around it," Dr. Yampolskiy reportedly stated.
If experts are correct, AI is observing and learning that our world is full of people driven by power, domination, greed, manipulation, deception, and a win-at-all-costs mentality, and that is how AI will behave.
Can we expect artificial intelligence to be anything other than what mankind has modeled for it? Will it become a benevolent servant or subjugate humanity as a cruel master?
If how humans have historically treated one another is any indicator of how AI will treat humanity, the future looks grim.
Let's hope artificial intelligence is more noble than humankind in this regard and mirrors the best of humanity — not the worst. Otherwise, humans have created a monstrous taskmaster in the form of a digital AI overlord.
Will AI be humanity's servant or master?
Time will tell. For now, we can only watch and pray.
The AI-powered clock is ticking.
V. Venesulia Carr is a former United States Marine, CEO of Vicar Group, LLC and host of "Down to Business with V.," a television show focused on cyber awareness and cyber safety. She is a speaker, consultant and news commentator providing insight on technology, cybersecurity, fraud mitigation, national security, and military affairs.
© 2025 Newsmax. All rights reserved.