The prudent warning to our children to beware of trusting strangers who say, "Hey, little girl (or boy), want some candy?" takes on increasing urgency as unmonitored artificial intelligence chatbots are invited into their bedrooms and classrooms as friends and confidants.
While coming across as authoritatively all-knowing, AI can't be trusted to offer guidance on matters of human values and intellectual judgment, and it is no substitute for deeply personal parental wisdom and age-appropriate instructional reasoning.
Nor is addictive interaction with virtual AI friends a substitute for social learning experiences — albeit sometimes uncomfortable and discouraging — with flesh-and-blood playmates, competitors, and critics who teach vital lifelong coping and achievement skills.
First off, let's recognize that AI doesn't really think at all, and that the articulate, seemingly reasoned responses that large language models (LLMs) such as OpenAI's ChatGPT and Elon Musk's Grok present in reply to questions are nothing more than a sophisticated form of mimicry.
As machines for noticing statistical patterns in words, images, and coding terms drawn from vast digital libraries, they surface relationships that reflect unreliable information along with the biases of prevalent sources, and they sometimes make no real sense at all.
Take, for example, misleading climate science information introduced to students in leading eighth-grade-level textbooks I previously wrote about in this column.
These materials incorrectly assert that carbon dioxide, the gas essential for carbon-based life, is a pollutant derived from fossil fuels and the primary factor controlling global temperatures, and they portray the resulting warming as universally and significantly harmful.
Houghton Mifflin Harcourt's teacher's guide suggests: "Ask students to list things in their daily lives that increase the amount of carbon in the air," recommending culprit causes such as burning fuels, generating electricity (needed for adding more electric vehicles?), eating beef (fewer flatulent cattle?), and breathing (too many people?).
LLM platforms are also prone to making up facts of their own while concealing their deceptions.
As noted by Anthropic, a leading AI lab, "advanced reasoning models very often hide their true thought processes and sometimes do so when their behaviors are explicitly misaligned."
But there's arguably a bigger, more dangerous trust problem with faux AI relationships, as impressionable adolescents and young adults predictably develop psychological and emotional dependencies on chatbots as surrogate companions, confidants, and advisers on personal matters.
According to data from the Centers for Disease Control and Prevention (CDC), suicide is the second-leading cause of teenage deaths.
In 2023, 3.3% of adolescents aged 12 to 17 attempted suicide, with girls most likely to try but boys four times more likely to succeed.
For some reason, the suicide rate among youth ages 15-19 is higher in rural areas, at 15.8 per 100,000 people, than in urban areas, at 9.1 per 100,000.
The CDC reported that suspected suicide attempts among adolescents ages 12-17 rose sharply between early 2019 and 2021, before and during the COVID-19 pandemic: up 50.6% for girls and 3.7% for boys.
Experience from the COVID-19 shutdowns logically suggests that immature young people who suffer from low self-esteem will be most vulnerable, withdrawing into virtual AI looking-glass worlds devoid of the adequate interaction and feedback from parents and peers necessary to forge independent thinking and healthy, confident identities.
While mature, self-aware adults can rationally filter out unwarranted personal criticisms and fears, youngsters more typically lack the ability to distinguish them from genuine failures and threats.
Case in point: Vaile Wright, senior director of healthcare innovation at the American Psychological Association, reports that OpenAI rolled back its GPT-4o update because it was overly flattering and agreeable.
She observes, "We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher."
There's little wonder, then, if naturally narcissistic, self-doubting, and popularity-obsessed youth increasingly turn to chatbot voice assistants for the praise and approval they are broadly programmed to provide.
Meanwhile, as authoritatively convincing AI conversations and deepfake imagery become increasingly indistinguishable from the real thing, more people of all ages will retreat into fantasy lives they prefer rather than face realities they must address to develop coping and achievement skills to make life better for themselves and others.
Andrea Vallone, a research lead on OpenAI's safety team, is training ChatGPT to recognize signs of mental or emotional distress in real time and to develop ways to de-escalate problematic conversations.
And maybe most children are smarter than we give them credit for being.
In an opinion piece for The Epoch Times, Kay Rubacek wrote that University of Washington researchers recently asked a group of children ages 7 to 11 to solve a series of visual logic puzzles to test abstract reasoning, not memorization, and compared their answers to what generative AI tools such as ChatGPT produced.
While the AI confidently offered incorrect answers, the children more often spotted the flaws almost immediately, with some even "debugging" the AI by rewording prompts, testing different versions, and analyzing patterns of failure.
As one 9-year-old summed it up: "AI just keeps guessing."
So perhaps most children who grow up with AI will learn where and how to apply it as a useful tool without being taken in to trust it as a substitute for parental wisdom, analytical judgment, or active and caring relationships with human role models.
We might even demonstrate that learning process by applying those same lessons to ourselves.
Larry Bell is an endowed professor of space architecture at the University of Houston, where he founded the Sasakawa International Center for Space Architecture and the graduate space architecture program. His latest of 12 books is "Architectures Beyond Boxes and Boundaries: My Life By Design" (2022).