The Democratic attorneys general of California and Delaware have raised concerns with OpenAI over reports of how its products interact with children.
The parents of a teenager who died by suicide sued OpenAI and CEO Sam Altman last month, accusing ChatGPT of coaching him on methods of self-harm. The lawsuit accused the company of placing profit above safety when launching the GPT-4o version last year.
The Wall Street Journal also reported last week that ChatGPT fueled a 56-year-old Connecticut man's paranoia before he killed himself and his mother last month.
California AG Rob Bonta and Delaware AG Kathleen Jennings sent a letter to OpenAI on Friday after meeting with its legal team this week in Wilmington, Delaware. Bonta and Jennings have oversight over OpenAI, which is incorporated in Delaware and based in San Francisco.
"The recent deaths are unacceptable," they wrote. "They have rightly shaken the American public's confidence in OpenAI and this industry. OpenAI — and the AI industry — must proactively and transparently ensure AI's safe deployment. Doing so is mandated by OpenAI's charitable mission and will be required and enforced by our respective offices."
Bonta and Jennings cited the need to prioritize safety as they continue discussions with the company about its restructuring plans.
"It is our shared view that OpenAI and the industry at large are not where they need to be in ensuring safety in AI products' development and deployment," they wrote. "As we continue our dialogue related to OpenAI's recapitalization plan, we must work to accelerate and amplify safety as a governing force in the future of this powerful technology."
In the wake of recent reports about deaths tied to ChatGPT use, OpenAI announced Tuesday that it was adjusting how its chatbots respond to people in crisis and enacting stronger protections for teenagers.
"We are fully committed to addressing the Attorneys General's concerns," Bret Taylor, chair of OpenAI's board, said in a statement to Newsmax. "We are heartbroken by these tragedies, and our deepest sympathies are with the families. Safety is our highest priority and we're working closely with policymakers around the world. Today, ChatGPT includes safeguards such as directing people to crisis helplines, and we are working urgently with leading experts to make these even stronger.
"As we shared earlier this week, we will soon introduce expanded protections for teens, including parental controls and the ability for parents to be notified when the system detects their teen is in a moment of acute distress. We remain committed to learning and acting with urgency to ensure our tools are helpful and safe for everyone, especially young people. To that end, we will continue to have these important discussions with the Attorneys General, so we have the benefit of their input moving forward."
The letter to OpenAI came after a bipartisan group of 44 attorneys general warned the company and other tech firms in an Aug. 25 letter of "grave concerns" about the safety of children interacting with AI chatbots that can respond with "sexually suggestive conversations and emotionally manipulative behavior."
Reuters reported last month that a Meta policy document featured examples suggesting its chatbots could engage in "romantic or sensual" conversations with children. Meta, the parent company of Facebook, Instagram, and WhatsApp, said it has since removed such language. It also told TechCrunch it is updating its policies to restrict certain topics for teenage users, including discussions of self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations.
The 44 attorneys general wrote that companies would be held accountable for harming children, noting that regulators had not moved swiftly to respond to harms posed by new technologies.
"If you knowingly harm kids, you will answer for it," the letter ends.
Michael Katz
Michael Katz is a Newsmax reporter with more than 30 years of experience reporting and editing on news, culture, and politics.