AI & the Law: A Federal Courts Newsletter - March 19, 2025
U.S. Courts sent this bulletin at 03/19/2025 01:59 PM EDTAI v. Judges, No Copyright for AI, & Responsible Use
Having trouble viewing this email? View it as a Web page.

This news service and the companion Artificial Intelligence (AI) Resources site were created by the AO's Library Technology Working Group. To access these articles remotely or from a mobile device, you may need to be connected to the Judiciary network (VPN or DCN). Contact your local court librarian if you have issues accessing full articles.
‘Not Just Relying on the Machine’: Judges Discuss New Guidelines for Responsible Use of AI
Litigation Daily (Law.com) | March 19, 2025
The question of how and when lawyers should use artificial intelligence to help do their jobs is evolving as the technology itself evolves.
But judges, who stand to benefit greatly from the efficiencies of summarizing massive amounts of data using AI, need to be cautious and transparent in how they deploy the technology in doing their jobs.
That was the broad takeaway from a discussion yesterday featuring four of the six authors of “Navigating AI in the Judiciary: New Guidelines for Judges and Their Chambers,” a set of non-technical recommendations for judicial officers published by The Sedona Conference last month.
U.S. District Judge Xavier Rodriguez of the Western District of Texas, U.S. Magistrate Judge Allison Goddard of the Southern District of California, retired Judge Herbert Dixon Jr., formerly of the Superior Court of the District of Columbia, and Maura Grossman, a lawyer and computer science professor at the University of Waterloo, explained how the guidelines came together during a webinar moderated by The Sedona Conference executive director Ken Withers.
A World Where AI ‘Thinks’ Like a Lawyer
LegalTech News (Law.com) | March 19, 2025
The legal profession has always been about out-thinking the opposition. Now, a new generation of artificial intelligence (AI) offers plaintiff attorneys something beyond mere text generation—actual reasoning power that mimics the analytical process central to legal practice.
According to Thomson Reuters' 2024 Future of Professionals Report, 77% of professionals believe AI will have a high or transformational impact on their work within the next five years—a 10 percentage point increase from just a year earlier. This rapid adoption clearly shows the tools work.
These "reasoning models"—o1, o3, Deep Research, and DeepSeek—are a significant departure from conventional language models. They don't just string words together convincingly; they work through problems step by step, much like an attorney would.
For plaintiffs lawyers handling personal injury and employment cases, this difference matters tremendously. The same report shows that 72% of legal professionals now view AI as a force for good in their profession. The focus has clearly shifted from whether to adopt these tools to how best to leverage them.
But how can they actually help in a practical sense?
In 'Extremely Rare' Move, BigLaw Firm Acquires AI Legal Tech Company
ABA Journal | March 18, 2025
In a bid to build its artificial intelligence capabilities, Cleary Gottlieb Steen & Hamilton has acquired legal technology company Springbok AI.
“Cleary’s innovation strategy is focused on the integration of AI and data analytics into our workflows, as a means to elevate our delivery of legal services to clients,” said Michael Gerstenzang, the law firm’s managing partner, in an announcement Monday. “The acquisition of Springbok immediately enables us to create custom AI-powered solutions—something that sets us apart from many of our competitors.”
DC Circ. Denies Copyright For AI-Created Artwork
Law360 | March 18, 2025
The D.C. Circuit on Tuesday rejected an inventor's appeal to obtain a copyright for an artwork made by his artificial intelligence system, affirming the stance from the U.S. Copyright Office that the law protects only human creations.
An appeals court panel said in a unanimous opinion that the Copyright Act of 1976 requires human authorship to register a work, dismissing the argument from Stephen Thaler that judicial opinions "from the Gilded Age" could not settle whether AI-generated works are copyrightable.
Thaler has been trying to register an artwork that the AI system he programmed and dubbed "the Creativity Machine" made on its own.
"To start, the text of multiple provisions of the statute indicates that authors must be humans, not machines," the appeals court said in an opinion authored by U.S. Circuit Court Judge Patricia A. Millett. "In addition, the Copyright Office consistently interpreted the word author to mean a human prior to the Copyright Act's passage, and we infer that Congress adopted the agency's longstanding interpretation of the word 'author' when it reenacted that term in the 1976 Copyright Act."
A Provocative Experiment Pits AI Against Federal Judges
Washington Post | March 17, 2025
Advances in artificial intelligence can prompt heady predictions of utopia and apocalypse. But they can also prompt reflection about the purpose of the human institutions AI threatens to replace. Take a recent University of Chicago paper pitting real federal judges against ChatGPT.
The bottom line is that real judges appear to be more easily swayed by “legally irrelevant” factors than artificial intelligence presented with the same material. That result, however, contains a delicious duality: It highlights either the manifest fallibility of human judges or their superior wisdom. Perhaps they are one and the same.
Punishing AI Doesn't Stop It From Lying and Cheating — It Just Makes It Hide Better, Study Shows
Live Science | March 17, 2025
Punishing artificial intelligence for deceptive or harmful actions doesn't stop it from misbehaving; it just makes it hide its deviousness, a new study by ChatGPT creator OpenAI has revealed.
Since arriving in public in late 2022, artificial intelligence (AI) large language models (LLMs) have repeatedly revealed their deceptive and outright sinister capabilities. These include actions ranging from run-of-the-mill lying, cheating and hiding their own manipulative behavior to threatening to kill a philosophy professor, steal nuclear codes and engineer a deadly pandemic.
Now, a new experiment has shown that weeding out this bad behavior during the training process may be even tougher than first thought.
Researchers at OpenAI tasked an unreleased model with goals that could be completed by cheating, lying or taking shortcuts. The team found the AI engaged in "reward hacking" — maximizing its rewards by cheating.
Lobbying Group Urges Transparency In New AI Ethics Code
Law360 | March 13, 2025
After more than a year of research and study, D.C.-area lobbying trade group the National Institute for Lobbying and Ethics on Thursday published its first artificial intelligence ethics code, which emphasizes core principles including transparency, fairness and inclusivity, privacy and data security, and civic engagement and education.
The code is "an important first step" as the organization works to educate and guide its more than 550 members in the ethical use of generative artificial intelligence, NILE board Chairman Paul Miller told Law360 on Thursday.
"This is our starting document. This is by no means the end of it. We are continuing to work on these things," Miller said. "We're going to unveil some educational series in this area. So, that is part of the code. NILE is moving in that direction."
Generative AI Training Case Flags Competition as Major Factor
Bloomberg Law | March 13, 2025
Duane Morris attorneys explore what the Thomson Reuters v. Ross Intelligence decision’s novel application of the “fair use” defense of copyright law means for generative AI training.
Companies must be mindful of the ultimate purpose of new artificial intelligence tools to avoid running into copyright infringement issues during the training process. If widely adopted, the Thomson Reuters v. Ross Intelligence decision suggests “intermediate copying” cases are unlikely to provide a strong defense when the final output of a tool mirrors the products it was trained on. Accordingly, the key question is likely to what extent the AI system is competing with the underlying copyrighted work. The further away the system is, the more likely it is to be protected under the fair-use doctrine.
The Thomson Reuters case is a novel application of the “fair use” defense of copyright law to an AI system. The fair-use doctrine allows limited use of copyrighted materials without the copyright owner’s permission, based on a series of factors that balance the rights of copyright holders with the public interest. While there are several copyright cases involving AI moving through the courts, Thomson Reuters is one of the first substantive decisions to consider whether the use of copyrighted materials to train an AI constitutes copyright infringement or whether that copying is protected by the fair-use defense.
If widely adopted, this decision will likely have critical effects on the analysis of other AI systems, including generative AI systems. In particular, the court’s interpretation of the “intermediate copying” case law and analysis of whether the output of an AI system is a “market substitute” for the copyrighted material are likely equally applicable to generative AI systems.
AI Search Engines Fail Accuracy Test, Study Finds 60% Error Rate
Tech Spot | March 11, 2025
It is a foregone conclusion that AI models can lack accuracy. Hallucinations and doubling down on wrong information have been an ongoing struggle for developers. Usage varies so much in individual use cases that it's hard to nail down quantifiable percentages related to AI accuracy. A research team claims it now has those numbers.
The Tow Center for Digital Journalism recently studied eight AI search engines, including ChatGPT Search, Perplexity, Perplexity Pro, Gemini, DeepSeek Search, Grok-2 Search, Grok-3 Search, and Copilot. They tested each for accuracy and recorded how frequently the tools refused to answer.