OPINION

AI Will Bring Either Utopian or Dystopian Future

A robot in a futuristic city, with people entering a building bearing a sign that reads "AI Revolution." (Dreamstime)

By Larry Bell | Monday, 02 June 2025 12:53 PM EDT

Microsoft AI CEO Mustafa Suleyman's book "The Coming Wave: AI, Power, and Our Future," coauthored with Michael Bhaskar, presents two opposite societal visions, one utopian and one dystopian, and which one prevails is a question that perhaps only artificial intelligence itself will determine.

While one view holds that the life-improving benefits of technologies such as AI far outweigh the costs and downsides, a contrarian perspective warns that even an AI intentionally designed to align with human interests could, if sufficiently powerful, overwrite its own programming and discard its safety features, with catastrophic consequences.

Although Suleyman decries "doom-mongering," he does not appear optimistic that humanity is prepared or equipped to "contain" AI's accelerating and ubiquitously transformational impacts.

As he puts it, "Massively omni-use general purpose technologies will change both societies and what it means to be human. This might sound hyperbolic. But within the next decade, we must anticipate radical flux, new concentrations and dispersals of information, wealth, and above all, power."

A Reordering of Individual, Government and Corporate Power

Suleyman points out that while most of us tend to measure AI's benefits by how well it enables an individual to perform a task, this ignores a larger reality: the most powerful forces in the world are groups of coordinated individuals working toward shared goals, including companies, bureaucracies, military services and markets.


AI enables a vast span of diverse services, drawn from very different sectors across huge parts of the planet, to collapse into single, very powerful organizations.

Google, for example, provides mapping and location, reviews of business listings, advertising, video streaming, office tools, email, photo storage, videoconferencing, etc.

Information technology has enabled South Korea's Samsung Group, which began as a noodle shop almost a century ago, to become a major player in global manufacturing, banking and insurance.

Such concentrations of power in vast, automated megacorporations tend to transfer value away from human capital (work) and reward raw capital.

As Suleyman observes: "Put the inequalities together, and it adds up to another great acceleration and structural deepening of an existing fracture. Little wonder there is talk of neo- or techno-feudalism — a direct challenge to social order."

AI Influences on Privacy and Social Freedom

Suleyman cautions that "AI can provide rocket fuel for authoritarians and for great power competition alike … an ability to capture and harness data at an extraordinary scale and precision; to create territory-spanning systems of surveillance and control."

With security bought at the price of privacy and freedom, might that power grab happen here in America?

As Suleyman reminds us, "If you doubt the appetite for surveillance and control, think about how societywide closures, inconceivable even a few weeks earlier, suddenly became an inescapable reality during the COVID pandemic."

AI is a powerful tool for totalitarian regimes, enabling them to preside over obedient populations through controlled information ecosystems and to achieve complete hegemony, with every aspect of life managed under an ever-watchful and ruthless security apparatus.

Even the current state of AI technology for social monitoring and control is formidable.

Chinese AI research is concentrating on bundling broad areas of personal surveillance data on a mass scale into increasingly comprehensive and granular compilations. These combine object tracking; scene understanding; face, voice and gait recognition; private messaging; biometric data; shopping and banking records; and highway license plate tracking to keep constant tabs on the individual behavior of the entire population.

Closer to home, Suleyman and Bhaskar note that London is one of the most surveilled cities in the world.

Catastrophic Risks Posed by AI Experiments and Terrorists

With the horrors of the recent pandemic and the Wuhan COVID lab leak fresh in memory, anticipate that the bioengineering capabilities afforded by AI will invite and enable grotesque sequels as scientific and malevolent enthusiasts alike experiment with the genetic code of life.


Consider how much worse that COVID lab leak would have been if the omicron variant, which infected a quarter of Americans within a hundred days of being detected, had been even deadlier and had remained dormant, incubating for years much like HIV before presenting treatable symptoms.

Accordingly, Suleyman theorizes that "a single pathogenic experiment could spark a pandemic, a tiny molecular event with global ramifications," and that such catastrophes become more likely as quantum computing increases AI power to an incomprehensible capacity.

In 2019, Google announced it had built a quantum computer, exploiting special properties of the subatomic world and chilled to a temperature colder than outer space, that could complete in seconds a calculation the company said would have taken a conventional computer 10,000 years.

Meanwhile, there may also be real reasons to worry that the super-smart AI systems now being created will develop digital minds and thoughts of their own.

Claude Opus 4, Anthropic's latest AI model, reportedly tried to blackmail an engineer who tested its ethical behavior by feeding it a fictitious email implying he was going to replace it with a newer version. The model repeatedly threatened to expose an extramarital affair, likewise fictitious and suggested in another email, if the shutdown proceeded.

While Anthropic later emphasized that Claude's willingness to blackmail or take other "extremely harmful actions," such as stealing its own code and deploying itself in unsafe ways, was "rare and difficult to elicit," it also noted that the model had not "definitively" passed the capability threshold that mandates stronger protections.

In other words, let's be gratefully relieved that it will very seldom decide to end human civilization as we know it.

Larry Bell is an endowed professor of space architecture at the University of Houston where he founded the Sasakawa International Center for Space Architecture and the graduate space architecture program. His latest of 12 books is "Architectures Beyond Boxes and Boundaries: My Life By Design" (2022). Read Larry Bell's Reports — More Here.

© 2025 Newsmax. All rights reserved.

