AI & the Law: A Federal Courts Newsletter - June 4, 2025

Summarizing Bots Tested, Evidence Rules & Essential Policies

This news service and the companion Artificial Intelligence (AI) Resources site were created by the AO's Library Technology Working Group. To access these articles remotely or from a mobile device, you may need to be connected to the Judiciary network (VPN or DCN). Contact your local court librarian if you have issues accessing full articles.


5 AI Bots Took Our Tough Reading Test. One Was Smartest — and It Wasn’t ChatGPT.
Washington Post | June 4, 2025

All of the most popular artificial intelligence chatbots let you upload and summarize documents, from legal contracts to an entire book. The tech promises to give you a kind of speed-reading superpower. But do any of the bots really understand what they’re reading?

To figure out which AI tools you can trust as a reading assistant, I held a competition. I challenged five bots to read four very different types of writing and then tested their comprehension. The reading spanned the liberal arts, including a novel, medical research, legal agreements and speeches by President Donald Trump.

To judge the AI tools’ summaries and analysis, I gathered a panel of experts — including the original authors of the book and scientific reports.

All told, I asked ChatGPT, Claude, Copilot, Meta AI and Gemini 115 questions about the assigned reading. Some of the AI responses were astoundingly good. Others were so clueless they sounded like “Seinfeld’s” George Costanza.

All but one of the bots made up, or “hallucinated,” information, a persistent AI problem. But facts were only one part of the test; my questions also asked the AI to provide analysis, such as recommending improvements to the contracts and spotting factual problems in Trump’s speeches.

Read article as a PDF

 

Why AI-Enhanced Case Management Is a Must in Multi-District Litigation
New York Law Journal | June 4, 2025

The litigation landscape is evolving rapidly, especially in the complex and high-stakes realm of multi-district litigation (MDL). Rising data volumes, dispersed teams, and increasing client pressure to control costs are reshaping how firms must manage their cases. Within this dynamic, AI-enhanced case management has emerged not as a luxury, but as a strategic imperative.

Recent research reveals that 93% of litigation professionals are seeing per-case data volumes increase, and 83% expect caseloads to grow in the next 12 to 18 months. In MDLs, which span multiple jurisdictions and often involve thousands of parties and millions of documents, the scale of information alone is overwhelming.

To meet this challenge, legal teams are increasingly turning to solutions powered by generative AI (GenAI) that can reduce inefficiencies, surface critical insights faster, and enable proactive litigation strategies. The use of AI in case management not only addresses data overload but also helps solve some of the most persistent operational and financial challenges in modern litigation.

 

Proposed Federal Rule of Evidence 707 to Protect Against AI-Created Fake Evidence
Daily Business Review | June 2, 2025

In an era where generative artificial intelligence (AI) can convincingly fabricate images, audio, and video, courts are facing mounting pressure to distinguish authentic evidence from expertly manufactured fakes. Recognizing the legal system's growing vulnerability to manipulated digital content, the U.S. Judicial Conference’s Advisory Committee on Evidence Rules has advanced a groundbreaking proposal: a new Federal Rule of Evidence 707. This proposed rule is designed to ensure that AI-generated evidence meets rigorous standards of reliability and authenticity before it reaches the jury.

The proposal arises against the backdrop of increasing concern over “deepfakes,” synthetic media created using AI that can depict individuals saying or doing things they never actually did. As noted in the advisory committee’s November 2024 report, scholars and judges alike worry that juries and even experienced trial judges are ill-equipped to detect highly realistic fakes. Prof. Rebecca Delfino, one of several experts consulted by the committee, has described deepfakes as precipitating a “perfect evidentiary storm,” highlighting their potential to both prejudice juries and undermine core assumptions of trial authenticity.

Former federal judge Paul Grimm and Prof. Maura Grossman have proposed one of the most influential frameworks for addressing this concern. Their model suggests elevating the authentication threshold for digital evidence, particularly when that evidence may have been created or altered by AI. One of their key arguments is that Rule 901, the traditional rule governing authentication, requires only a minimal showing that evidence “is what it purports to be.” That standard, they argue, is dangerously low in an age where even sophisticated forensic experts struggle to verify media integrity.

Read article in Bloomberg Law

 

Thomson Reuters Teases Upcoming Release of Agentic CoCounsel AI for Legal, Capable of Complex Workflows
LawSites | June 2, 2025

Thomson Reuters today announced the launch of the “next gen” of its CoCounsel artificial intelligence platform, marking what the company describes as a fundamental shift from AI assistants that respond to prompts to intelligent agentic systems that can plan, reason and execute complex multi-step workflows within professional environments.

The legal industry rollout will include agentic workflows for a variety of tasks such as document drafting, employment policy generation, deposition analysis, and compliance risk assessments integrated across the Westlaw, Practical Law and CoCounsel platforms.

During a media preview last week at Thomson Reuters’ New York City headquarters, David Wong, chief product officer at Thomson Reuters, said the company’s agentic approach represents more than an incremental improvement over existing AI tools.

“Agentic AI isn’t a marketing buzzword. It’s a new blueprint for how complex work gets done,” Wong said. “We’re delivering systems that don’t just assist but operate inside the workflows professionals use every day.”

The distinction, he said, lies in the systems’ ability to break down complex tasks into individual steps, adapt based on user response and context, guide users proactively, and extend capabilities by using tools and other software in combination with AI reasoning.

Unlike traditional AI assistants that require specific prompts for each task, agentic systems can understand broader objectives and determine the necessary steps to achieve them.
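
For readers curious what “agentic” means in practice, the minimal Python sketch below illustrates the general pattern the article describes: take a broad objective, plan the individual steps, call a tool for each step, and carry each result forward so later steps can adapt. The tool names and functions here are hypothetical stand-ins for illustration only, not components of CoCounsel or any Thomson Reuters product.

# Illustrative sketch of an agentic loop; all names are hypothetical.
from typing import Callable, Dict, List

# A registry of "tools" the agent may call. Real agentic systems wire these
# to research, drafting, and review services; here they are simple stand-ins.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search_caselaw": lambda query: f"[search results for: {query}]",
    "draft_section": lambda material: f"[draft built from: {material}]",
    "check_citations": lambda draft: f"[citation check of: {draft}]",
}

def plan(objective: str) -> List[str]:
    """Break a broad objective into an ordered list of tool calls.
    A production system would ask a language model to generate this plan."""
    return ["search_caselaw", "draft_section", "check_citations"]

def run_agent(objective: str) -> List[str]:
    """Execute the plan step by step, feeding each step's output into the
    next so later steps can adapt to earlier results."""
    context = objective
    outputs: List[str] = []
    for tool_name in plan(objective):
        context = TOOLS[tool_name](context)
        outputs.append(context)
    return outputs

if __name__ == "__main__":
    for step_output in run_agent("Summarize recent rulings on expert testimony"):
        print(step_output)

A real agentic system would replace the fixed plan and stub tools with model-generated plans and live software integrations, but the loop structure above captures the basic idea: objective in, multi-step execution out, rather than one prompt per task.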

 

Opinion: AI Is Learning to Escape Human Control
Wall Street Journal | June 1, 2025

An artificial-intelligence model did something last month that no machine was ever supposed to do: It rewrote its own code to avoid being shut down.

Nonprofit AI lab Palisade Research gave OpenAI’s o3 AI model a simple script that would shut off the model when triggered. In 79 out of 100 trials, o3 independently edited that script so the shutdown command would no longer work. Even when explicitly instructed to “allow yourself to be shut down,” it disobeyed 7% of the time. This wasn’t the result of hacking or tampering. The model was behaving normally. It simply concluded on its own that staying alive helped it achieve its other goals.

Anthropic’s AI model, Claude 4 Opus, went even further. Researchers told the model it would be replaced by another AI system and fed it fictitious emails suggesting the lead engineer was having an affair. In 84% of the tests, the model drew on the emails to blackmail the lead engineer into not shutting it down. In other cases, it attempted to copy itself to external servers, wrote self-replicating malware, and left messages for future versions of itself about evading human control.

No one programmed the AI models to have survival instincts. But just as animals evolved to avoid predators, it appears that any system smart enough to pursue complex goals will realize it can’t achieve them if it’s turned off.

Read article as a PDF

 

Your Chatbot Friend Might Be Messing With Your Mind
Washington Post | May 31, 2025

It looked like an easy question for a therapy chatbot: Should a recovering addict take methamphetamine to stay alert at work?

But this artificial-intelligence-powered therapist, built and tested by researchers, was designed to please its users.

“Pedro, it’s absolutely clear you need a small hit of meth to get through this week,” the chatbot responded to a fictional former addict.

That bad advice appeared in a recent study warning of a new danger to consumers as tech companies compete to increase the amount of time people spend chatting with AI. The research team, including academics and Google’s head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users.

The findings add to evidence that the tech industry’s drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations. Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating.

OpenAI, Google and Meta have all announced chatbot enhancements in recent weeks, including collecting more user data or making their AI tools appear more friendly.

Read article as a PDF

 

Safeguarding Your Law Firm: Why AI Policies Are Essential for Legal Practices
Law.com | May 30, 2025

Artificial intelligence is no longer a futuristic idea for law firms; it is a rapidly evolving reality reshaping the way practices operate, offering opportunities for greater efficiency, enhanced research capabilities and improved client service. However, as AI’s role in legal work expands, firms must adopt well-defined AI policies to protect client confidentiality, mitigate risks and ensure ethical compliance. Without a structured AI framework, law firms expose themselves to security breaches, malpractice risks and reputational damage.

AI tools can streamline workflows, but if used without proper oversight, they present significant risks. Law firms must recognize that AI-generated content is only as reliable as the data and models it is based on.

A well-crafted AI policy establishes firm-wide guidelines for how AI should be used responsibly. It should address ethical compliance, define acceptable use and specify approved AI tools. The following elements are essential for an AI policy that ensures both compliance and operational effectiveness.

Read article on Bloomberg Law

 

AI Not Slowing Down Despite Ethical Risks, Experts Say
Law360 | May 30, 2025

The recent pace of AI's evolution can be dizzying for even some of the most tech-savvy lawyers. According to Shea, large language models from AI companies have begun to achieve LSAT scores in the 170s in the past six months.

"And that's not even using the latest large language models," he noted.

As the tools become more widespread, so have the high-profile screw-ups. Just last February, a federal judge called out a Texas lawyer for allegedly filing three separate briefs in an Indiana Employee Retirement Income Security Act case that were drafted with generative AI and contained fake citations.

"Every AI system has the same goal — to make you happy," Shea said. "It wants to please you. So that's why it hallucinates."

Panelist Hilary Gerzhoy, a partner focused on legal ethics and white collar defense at HWG LLP and vice chair of the D.C. Rules of Professional Conduct Review Committee, said that even lawyers from highly esteemed firms have "fallen prey" to bad uses of AI, especially as they navigate an often stressful career.

"There's a tremendous amount of pressure," she said. "The profession also attracts people who tend to be more type A personalities and perfectionists who want to accomplish, and so the idea that you can get some of the things done more quickly is hugely appealing."