OPINION

Is Smart Technology Making Us Dumber?

(Photo: Pop Nukoonrat/Dreamstime.com)

By Larry Bell | Friday, 15 August 2025 12:56 PM EDT

A recent MIT study, along with follow-up commentary by literacy educators at Arizona State University's Teachers College, suggests that AI technologies such as chatbots built on large language models (LLMs) are leading to lazy, shallow thinking that sidetracks the development of intellectually probing minds.

Are they right?

Or might AI instead be taking on much of the distracting busywork, freeing our minds for larger intellectual pursuits?

Might both hypotheses be true as cautionary possibilities worthy of cognitive attention?

The MIT research, which used electroencephalography (EEG) to assess cognitive engagement while exploring the neural and behavioral consequences of LLM-assisted student essay writing, indicates we may be sacrificing cognitive capacity and creativity for short-term convenience.

The students were divided into three study condition groups: one using ChatGPT, a second using Google for research, and a third drawing exclusively on individual logic and reasoning (no tools).

Each group completed three sessions under the same condition; in a fourth session, LLM users were reassigned to the brain-only condition (LLM-to-Brain), and brain-only users were reassigned to the LLM condition (Brain-to-LLM).

Notably, 83% of those who used ChatGPT to draft their work couldn't recall a single sentence they had written just minutes earlier.

Based on monitoring EEG brain activity, the ChatGPT users showed significantly decreased neural engagement, while brain-only writers generated nearly double the number of connections in the alpha frequency band associated with attention and creativity.

In the theta band, related to memory formation and deep thinking, the gap was even greater: 62 connections for brain-only writers versus 29 for ChatGPT users.

Steven Graham, a Regents and Warner professor in the Division of Leadership and Innovation at Arizona State University's Teachers College, characterizes this gap as "cognitive debt."

Some ideas are difficult to get a handle on and require us to engage at different levels. Depending on smart machines to do that deep thinking for us deprives us of the personal learning benefits, with convenience dampening the fires of creativity and reasoning.

Graham's English teachers, who then reviewed the essays blind without knowing which were AI-generated, described the ChatGPT work as having "close to perfect use of language and structure, while simultaneously failing to give personal insights or clear statements."

The teachers also described the ChatGPT essays as "soulless" because many sentences were empty of content, lacking "personal nuances" associated with individual thinking.

Is this, and will it continue to be, a worsening societal pattern?

Writing in RealClearScience, Bruce Abramson, director of the American Center for Education and Knowledge, doesn’t appear optimistic.

Abramson observes, "Today, information is overabundant. No one needs to know anything because the trusty phones that never leave our sides can answer any question that might come our way.

"Why waste your time learning, studying, or internalizing information when you can just look it up on demand?"

As Abramson points out, whereas in 2011 an estimated one-third of Americans and one-quarter of American teenagers had smartphones, more than 90% of Americans and 95% of teenagers have them today.

Few of today's college students have ever operated without the ability to scout ahead or query a "smart" device for information on an as-needed basis.

When I recently asked some of my undergraduate students whether they knew the purpose of a slide rule, the simple tool my generation of architects used for structural and other numerical calculations before pocket calculators, I received universally vacant looks.

Abramson concludes that having "outsourced knowledge, comprehension, and judgement to sterile devices easily biased to magnify public opinion," we've raised a generation incapable of deep understanding.

At age 87, having observed several generations of my own students and continuing to do so, I strongly disagree, believing instead that such pessimism is entirely baseless.

After all, if none of them knows what a slide rule was, they may not be missing much: the cognitive attention my generation spent calculating beam, column, and truss strength for a particular application might have been better used exploring and comparing other structural options.

Here, let's differentiate between cognitive pattern recognition and purpose-directed intellectual "vision."

Just as Steven Graham's teachers reviewing MIT's ChatGPT-generated essays found them stale and pointless, we shouldn't expect or rely on derivative large language models to replace the human "generative intelligence" that establishes goals and purpose.

Despite their ability to solve some of the world's most complex math problems and convincingly simulate human relationships, these systems can only extrapolate from patterns of words, images, or code ingested from vast digital libraries.

All large language search and pattern-matching models are designed to guess which word, or portion of a word, is most likely to come next in an answer, and in the process they regularly get facts wrong, invent them, jumble them, and yes, sometimes entirely make stuff up.

These wrong and sometimes senseless answers are known as "hallucinations" because AI apps like ChatGPT and Gemini deliver them with authoritative confidence rather than honestly admitting they don't know, much like clueless students guessing on multiple-choice tests.

AI can stimulate us to inquire and learn, introduce us to new patterns of possibilities, undertake enormously complex calculations in short order, and even correct our imperfect grammar.

But depending on these systems to think for us?

Only if we’re inhumanly dumb enough to allow that to happen.

Larry Bell is an endowed professor of space architecture at the University of Houston where he founded the Sasakawa International Center for Space Architecture and the graduate space architecture program. His latest of 12 books is "Architectures Beyond Boxes and Boundaries: My Life By Design" (2022). Read Larry Bell's Reports — More Here.

© 2025 Newsmax. All rights reserved.
