Two federal judges have admitted to issuing rulings that contained embarrassing errors caused by the use of artificial intelligence.
Senate Judiciary Committee Chair Chuck Grassley, R-Iowa, released responses Thursday from U.S. District Judges Henry T. Wingate of Mississippi and Julien Xavier Neals of New Jersey, as well as from Administrative Office of the U.S. Courts Director Robert Conrad, concerning the judges’ use of AI to draft court orders that contained significant errors.
Both judges acknowledged that their staff had used generative AI to prepare draft orders that included incorrect citations, references to individuals not involved in the cases, and fabricated quotations attributed to defendants.
“Honesty is always the best policy,” said Grassley in a release.
“I commend Judges Wingate and Neals for acknowledging their mistakes, and I’m glad to hear they’re working to make sure this doesn’t happen again,” he said. “Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law.”
“The judicial branch needs to develop more decisive, meaningful and permanent AI policies and guidelines. We can’t allow laziness, apathy or overreliance on artificial assistance to upend the Judiciary’s commitment to integrity and factual accuracy,” Grassley concluded.
Following the Senate inquiry, both judges implemented new procedures to prevent similar errors.
Wingate now mandates a second independent review for all draft opinions, orders, and memos, and requires cited cases to be printed and attached to the final drafts.
Neals has instituted a written policy prohibiting clerks and interns from using AI in drafting judicial documents, along with a multilevel opinion review process.
The AO informed Grassley that it had created an advisory AI Task Force, which issued interim guidance on July 31, 2025. The guidance offers “general, non-technical suggestions” permitting the “use of and experimentation with AI tools,” including consideration of “whether the use of AI should be disclosed.”
The AO noted that the recommendations are temporary and will be refined as more comprehensive policies are developed.
The Senate Judiciary Committee holds oversight authority over the federal courts, judges, and judicial proceedings.
Grassley said the errors linked to AI use in these cases raise questions about the accuracy and reliability of judicial decision-making.
Legal experts, and AI developers including the maker of ChatGPT, emphasize that artificial intelligence should be used in the legal field only as a supplemental research or drafting tool, never as a substitute for professional judgment or factual verification.
In judicial or legal contexts, AI output must be independently reviewed to ensure accuracy, relevance, and compliance with ethical and procedural standards.
Jim Mishler, a seasoned reporter, anchor and news director, has decades of experience covering crime, politics and environmental issues.