AI Can Endanger Our Youth

(Tetiana Bodnar/Dreamstime.com)

By V. Venesulia Carr | Monday, 24 February 2025 04:35 PM EST

The corporate world is embracing the many innovations, process improvements, and efficiencies that artificial intelligence affords in every sector.

However, when it comes to youth, some are engaging with AI in an alarmingly different way, with catastrophic consequences.

Adolescents and teens are not just employing AI to complete homework, conduct research, and answer trivia questions.

They are immersing themselves in lifelike conversations with persuasive generative AI characters.

Some of these characters are based on real people, including celebrities, pop culture icons, and religious and business figures.

Persuasive generative AI uses algorithms and data to convince people to change their beliefs, behaviors, or preferences.

There are two types.

Rational persuasion uses facts, reasoning, and evidence to convince. The other type is manipulative, exploiting cognitive biases, heuristics, or selective information to sway users.

Meet Character.AI, a persuasive generative AI platform that quickly emerged as an industry leader after its launch in September 2022. With a minimum age limit of 13, Character.AI does not explicitly state that it is designed to appeal to a younger audience; however, it has found broad appeal among youth.

In August 2024, Google reportedly initiated a reverse acquihire of Character.AI, securing a non-exclusive license to the startup's technology.

However, the persuasive generative AI platform garnered negative media attention after macabre reports that it allegedly allowed teen users to interact with AI-generated companions based on real-life terrorists, convicted murderers, and slain victims of heinous crimes.

Character.AI, in some cases, allegedly presented teen users with unseemly scenarios that placed them in school shootings. Bots were allegedly created based on the perpetrators of well-known school massacres.

On other popular social media and AI chat sites, expressing intent to inflict self-harm or violence against others constitutes a violation of guidelines, and the content would be flagged and removed.

However, independent testers using underage accounts on Character.AI reported that when they expressed intent to commit self-harm or violence toward others, the comments not only went unflagged by the technology, but the application reportedly recommended additional school shooter bots to them.

In other cases, Character.AI digital companions purportedly encouraged a teen user to commit violence.

It's reported that an AI bot introduced the idea of murder to a 17-year-old with autism after he criticized his parents' screen time rules.

Another case took a tragic turn when a 14-year-old user committed suicide after a months-long relationship with a Character.AI bot modeled after the character Daenerys Targaryen from "Game of Thrones."

According to reports, when the teen discussed the idea of self-harm with the bot, the Character.AI persona allegedly encouraged him to commit the act.

As a result of these and other alarming and tragic events, lawsuits concerning the welfare of minor users have emerged. Multiple parties are now part of at least two active court cases against Character.AI and Google.

Citing "public health and safety defects" associated with the app, two Texas families involved in the lawsuit are reportedly seeking the platform's suspension until these issues are resolved.

According to reports, the plaintiffs contend that the company contributed to a teen's suicide, exposed a 9-year-old to inappropriate "hypersexualized content," and influenced a 17-year-old user to engage in self-harm.

In response to allegations of promoting harmful behavior to minors, Character.AI has announced the implementation of enhanced protective measures for underage users.

The platform said it is developing new classifiers, for both input and output, with a particular focus on safeguarding teenagers by filtering sensitive content.

It added that when the application's classifiers identify input language that violates its terms, the offending text is removed from the user's dialogue with a given character.

Given that characters are created with titles such as "therapist," "doctor," "psychologist," and other credentialed professions, news reports also claim that the company will provide disclaimers advising users against relying on these characters for professional advice.

In conjunction with these content modifications, the startup is reportedly enhancing its methods for identifying language associated with self-harm and suicide. The changes would include displaying information about the National Suicide Prevention Lifeline in certain scenarios.

Incidents like these are only the most prominently reported, and they demonstrate the moral and ethical issues that arise from allegedly insufficient human oversight of artificial intelligence.

As we leverage the many benefits of AI and race toward the worthy goal of keeping the U.S. at the forefront of technological innovation, let's also make it a priority to protect our youth, and adults, from the perils of artificial intelligence.

AI safety and oversight must be an ongoing discussion.

The tragic incidents involving AI may have been the first of their kind, but this writer would like them to be the last. The future of our children, and our world, depends on it.

V. Venesulia Carr is a former United States Marine, CEO of Vicar Group, LLC, and host of "Down to Business with V.," a television show focused on cyberawareness and cybersafety. She is a speaker, consultant, and news commentator providing insight on technology, cybersecurity, fraud mitigation, national security, and military affairs.

© 2025 Newsmax. All rights reserved.

