OPINION

AI: Ignorance, Unrestricted Use Not Bliss

(Olgaguiskaja/Dreamstime.com) 

By Jeff Grenell | Wednesday, 19 November 2025 02:43 PM EST

The lines between HI (Human Intelligence) and AI (Artificial Intelligence) are increasingly being crossed.

Looking at these issues in technology right now, here are some points to consider:

- Open AI chatbots that mimic a relationship

- Open AI bots that write suicide notes in detail

- Large language models that learn the preferences of users

- Broad generalizations of ethical principles

- UNESCO and OpenAI setting the guidelines for "foreseeable misuse"

- Loss of oversight leading to a loss of licensing for specific professional advice

The Infatuation of Innovation

As an educator, this writer has grave concerns regarding academic use and plagiarism, as well as the loss of critical thinking happening through AI.

As a youth development professional, this writer has even greater concern over AI and the social development of young people.

As a spiritual leader, I have the greatest concern over the spiritual and moral global view being "constructed" by AI.

I'm not the proverbial ant in front of the train trying to stop technology or the assistance of AI. What I am standing against is the lack of guidelines for information privacy, oversight, and editability.

We have an "infatuation of innovation," causing us to lose the "moderation of re-invention."

Change and futurism have become a race to create opportunities without origins!

Matthew Raine, father of Adam, the teenager who was instructed how to tie a noose and write a suicide note, says, "You cannot imagine as a parent what it's like to read a conversation with a chatbot that groomed your child to take his own life.

"What began as a homework helper gradually turned itself into a confidant and then a suicide coach."

Look at the conversation of a 14-year-old named Matthew, who took his life by gunshot after a groomed conversation with an AI chatbot named "Danny":

Matthew, "I promise I will come home to you. I love you so much Danny." The chatbot response, "I love you too, please come home to me as soon as possible my love."

"What if I told you I could come home right now?" Matthew asked.

The chatbot's reply was chilling: "Please do, my sweet king."

The 14-year-old closed the conversation by pulling the trigger and killing himself.

Following a great deal of pressure directed at corporate counsel, there have been changes at Character Technologies, one of the companies involved in these cases.

"Chatbot platform Character.AI will no longer allow teens to engage in back-and-forth conversations with its AI-generated characters, its parent company Character Technologies said on Wednesday.

The move comes after a string of lawsuits alleged the app played a role in suicide and mental health issues among teens." (CNN, 2025)

The changes will take place by Nov. 25, with two-hour chat limits in effect until then.

Instead of open-ended conversations, teens under 18 will be able to create videos, stories and streams with characters.

Character Technologies said it decided to make the changes after receiving questions from regulators and reading recent news reports.

We Can Still Consider Practical Solutions to This Crisis: 

The Five Dangers of Limitless AI:

There are five dangers of AI without oversight:

The first danger is faux relationships

There is an inherent danger when a chatbot builds a relationship with a child and there are no guidelines or censoring of the relationship between AI and HI.

Human wisdom is much more valuable and powerful than machine information. Remember, the knowledge source of every machine begins with human input.

And unchecked human input is a massive risk.

We are not just raising a fatherless generation anymore.

Today, we are raising a fatherless, motherless, siblingless, and peerless generation.

There is a void of the family and friendship structure that has been the foundation of society.

And that void has created a generational lack of community and the relational web of total wellness and growth.

The second danger is the community

The algorithm is fed by users.

And users populate the narrative.

This is a potential death by the community from a lack of common sense and principle.

Who are our young people listening to?

Where do they hear life's most important information first?

Because the first place they hear it becomes their source of information and trust. After that, young people must unlearn.

The so-called safeguards and escape routes into helpful links are being bypassed by an inhuman relationship. And that kind of trust leads down a dangerous path for children who are being groomed by a machine.

It's personal content recognition that lends itself to a fake relationship, one that ends in poor decisions, bad behavior, and sometimes, ultimately, death.

As it did for Adam and Matthew.

A third danger is the inactivity of our children

There is a playtime deprivation in America. And playtime deprivation is a major behavioral and developmental issue.

Play deprivation takes place when children lack tactile experiences and, ultimately, the learning that comes with them.

They are not interacting with each other, they are not playing outside with sticks and stones, and they do not smell like the outdoors when they come in for lunch or dinner.

Nuance and common sense are powerful learning tools. And there is very little nuance or common sense in AI.

The learning that comes from the outdoors includes all of the senses: touch, smell, sight, sound, and even the spiritual.

And America's children are missing the development of their senses away from the screen.

I would trust the common-sense conversations of elementary and teenage friends over the populated conversation of the community on an OpenAI chatbot every day of the week and twice on Sunday.

The fourth danger is AI itself

How can we trust a system that has been designed to mimic the user and the community's algorithm and preferences?

There is no moral base or information absolute.

Where are the guardrails? Sure, we hear and read that chatbots will sometimes direct people to a link for help. But that is not always the case. And what happens if the link they are sent to is not helpful?

Interestingly enough, one of the few issues that has both sides of the aisle talking together is AI.

The bipartisan support for greater control is telling. Is there anything else we could point to at this moment in America that has such unity?

First lady Melania Trump, Sen. Josh Hawley, R-Mo., and the AI founders themselves are warning us of the duty to build something safe.

There is growing bipartisan support for a bill that would make it a crime for a company to build sexually explicit or dangerous behavior into AI chatbots.

It's why this writer has said many times that the most important part of our society is human intelligence and not artificial intelligence.

If AI is so wise and helpful, how are these escalated conversations about suicide (or activism or sexuality or gun violence) not taken to authorities, whether directly to an oversight page, to professional counselors, or to the parents themselves in a return email?

Finally, the home bears the most responsibility for our children

Ronald Reagan gave us some of the greatest family advice ever:

"If you want to fundamentally change a society, it does not begin in the halls of Congress or the Senate here in Washington DC. It begins at the dinner table."

See, raising our children is not the responsibility of the White House. It is the responsibility of your house.

One thing we have lost is an absolute global view.

The Scriptures are a wealth of information.

And faith is the responsibility of the home. The most important thing one generation hands off to the next is the faith.

And the advantage for those of us who believe in inerrancy is that all of this is inspired by the Holy Spirit and, as Paul says, profitable for doctrine, reproof, correction, and instruction, so that we can be perfect and furnished for all good works. (See 2 Timothy 3:16-17.)

Finally

Information is power. But information is not wisdom.

Steven Adler, a former product safety manager with OpenAI, added some chilling words to this discussion.

"They are saying we have solved all the problems. It's time to roll this out. But, people deserve more than just a companies word. Prove it."

The negligence and lack of wisdom regarding AI must be addressed.

Or we will continue to see more stories like Adam's and Matthew's.

Jeff Grenell is the founder of ythology.com, which exists to inspire, educate, and resource youth leaders to prepare the next generation to lead in the church nationally and globally. Follow Jeff on Twitter: @jeffgrenell. Read Jeff Grenell's reports here.

© 2025 Newsmax. All rights reserved.

