OPINION

Can China's DeepSeek AI Outsmart U.S. Competitors?


By Larry Bell | Monday, 21 July 2025 03:47 PM EDT

(Editor's note: The following opinion column does not constitute an endorsement of any product or service on the part of Newsmax.)  

In January of this year, a new Chinese AI start-up, DeepSeek, triggered a roughly $1 trillion Wall Street bloodbath that shocked Silicon Valley.

Investors in American chipmakers headed for the exits as the tech-heavy Nasdaq fell 3.1%, driven by a 16.9% dive in Nvidia shares, which stabilized and rebounded 8.9% to $128.99 a few days later.

This occurred when DeepSeek’s R1 large language model, which cost far less to "train" than ChatGPT, rivaled its functional performance.

DeepSeek’s founder, Liang Wenfeng, launched his business career with a hedge fund firm, Jacobi, which he created in 2013, a few years after his university graduation, to use AI to find profitable trades in financial markets.

In 2015, he co-founded High-Flyer, another investment firm, with two college friends.

It is perhaps no coincidence that Liang chose to launch his enterprises in the tech hub of Hangzhou, the same city where tech giant Alibaba is based and home to rich technical talent.

China has roughly nine times as many engineers as the U.S. and perhaps 15 times as many science and technology graduates.

Liang’s team started building computing systems with Nvidia graphics-processing units in 2019. High-Flyer was among only a few Chinese companies to have Nvidia’s high-end chips by late 2022 when OpenAI first released ChatGPT.

These Nvidia chips, like nearly all leading GPUs in U.S. AI data centers, are fabricated by the Taiwan Semiconductor Manufacturing Co.

Until recently, AI models that support conversational language programs such as OpenAI’s ChatGPT were trained on a vast compilation of text, images and other data, applying specialized algorithms to find patterns that a chatbot could use to hold a conversation.

DeepSeek dramatically cut processing time and power costs by training its models to route questions on different topics to a mixture of expert components in the appropriate fields, rather than first reviewing all the information, mixed with noisy commentary, on the internet.

Delegating questions to a roster of recognized topical experts eases dependence on very advanced chips and reduces the processing time and power required before questions are ever asked.

Although this shortcut uses more time and power in producing answers, drawing on specialized expert sources requires far less model training time, at a fraction of the cost of competing models.
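
The approach described above is known in AI research as a "mixture of experts." The sketch below is a minimal illustration of the routing idea only, not DeepSeek's actual code; all names and sizes are hypothetical toy values. A small gating network scores each input, and only the top-scoring experts are run, so most of the model's parameters sit idle on any given question:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy sizes, chosen only for illustration.
    DIM, NUM_EXPERTS, TOP_K = 16, 8, 2

    # Each "expert" here is a tiny linear layer; in a real model each
    # expert is a large subnetwork specialized for certain topics.
    experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]

    # The gating network scores how relevant each expert is to an input.
    gate = rng.normal(size=(DIM, NUM_EXPERTS))

    def moe_forward(x):
        """Route x to the TOP_K best-scoring experts and blend their outputs."""
        scores = x @ gate                     # one relevance score per expert
        chosen = np.argsort(scores)[-TOP_K:]  # indices of the top-k experts
        w = np.exp(scores[chosen] - scores[chosen].max())
        w /= w.sum()                          # softmax mixing weights
        # Only TOP_K of NUM_EXPERTS experts actually compute anything,
        # which is where the savings in chips, time, and power come from.
        return sum(wi * (x @ experts[i]) for wi, i in zip(w, chosen))

    print(moe_forward(rng.normal(size=DIM)).shape)   # -> (16,)

Because only two of the eight toy experts run per input, compute per question scales with the number of experts consulted rather than with total model size, while overall capacity stays large; this is the "do more with less" tradeoff the column describes.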

Meanwhile, with global AI development creating insatiable demand for, and a supply shortage of, advanced chips as big tech pours mountains of cash into data centers, Nvidia is no longer the only supplier in town.

Amazon.com, Google and Microsoft have also been developing their own in-house chip designs to support their AI platforms, and Nvidia rivals Intel and Advanced Micro Devices are pushing their own AI-specific chips.

As the U.S. has tightened restrictions on chip shipments to China in recent years, Nvidia responded by producing significantly less-powerful chips for the Chinese market, such as the H800, that met Washington’s restrictions; DeepSeek said it used some of these to develop its latest R1 model.

And as DeepSeek’s "do more with less" approach challenges previous assumptions regarding how much computing power and spending is needed for substantial AI advances, OpenAI, Oracle and SoftBank recently made headlines announcing an up to $500 billion Stargate joint venture infrastructure investment.

Microsoft plans to spend $80 billion on AI data centers this year, and Meta CEO Mark Zuckerberg says he plans to spend about $65 billion on AI projects this year, including a data center "so large that it would cover a significant part of Manhattan."

Meta also expects to have 1.3 million advanced chips by the end of this year, while DeepSeek’s R1 model reportedly requires as few as 10,000 for further development.

Elon Musk’s SpaceX has agreed to invest $2 billion in xAI, a startup which is racing to catch up with OpenAI.

Earlier this year, he merged xAI with X, combining what was a small research lab with his social-media platform that helps amplify the reach of its Grok chatbot.

Writing in The Wall Street Journal, tech writer George Gilder opines that the key AI breakthrough now isn’t software, but rather a new era beyond microchips called wafer-scale integration, which consolidates the essence of an entire data center on a single 12-inch wafer.

Cerebras, a U.S. company, has demonstrated "beyond chip" wafer-scale computing with about four trillion interconnected transistors, financed by G42, a tech company in the United Arab Emirates. It had planned an initial public offering until it ran into resistance from the U.S. government based on possible links between China and the U.A.E.

Tesla’s Dojo system for AI training processes vast accumulations of video data from the cameras on Tesla’s automobiles on "training tiles" that are interconnected across entire wafers.

Gilder observes that "since large language models such as DeepSeek and ChatGPT use unreliable internet data, they are inherently less likely to achieve intelligence in the real world than the pixel processors on Tesla’s Dojo tiles."

A further wafer-scale breakthrough by a Georgia Tech team led by Dutch physicist Walter de Heer, reported in the journal Nature, uses an ultra-thin layer of graphene atop a silicon carbide wafer that switches 1,000 times faster than silicon.

Again, as is typical of many AI advancements, a major U.S. national security concern revolves around de Heer’s previous links, along with those of his team’s Chinese students, to Tianjin University in China and alleged research connections to the Chinese military.

We certainly can’t afford to let China outsmart us at our own universities.

Larry Bell is an endowed professor of space architecture at the University of Houston where he founded the Sasakawa International Center for Space Architecture and the graduate space architecture program. His latest of 12 books is "Architectures Beyond Boxes and Boundaries: My Life By Design" (2022). Read Larry Bell's Reports — More Here.

© 2025 Newsmax. All rights reserved.

