Tech News & Trends - August 4, 2025 **NEW ISSUE**




To access some of the articles, you may need to be on the court's network.


Law & Tech

EU Rules for Most Powerful AI Models Go Into Effect: Explained (Bloomberg) "Companies building the largest AI models, including Google, Microsoft and OpenAI, have a new set of regulations to follow in the EU, starting Aug. 2. The world’s most comprehensive AI regulation went into effect earlier this year. The EU AI Act aims to protect consumers from the technology’s harms by focusing on risks to individuals—with higher-risk uses facing more obligations.... Provisions in the AI Act take effect in stages, and Aug. 2 marks the effective date of rules for general-purpose AI models—the large, powerful models that can be put to many different uses, including underpinning generative AI chatbots."

Mich. Judge Sanctions Attys for False Case Quotations (Law360) "A Michigan federal judge on Monday ordered plaintiffs' attorneys in two cases against a robotics company to pay for the time opposing counsel took in filing an additional briefing because of false case quotations. Responsive briefings by attorneys representing industrial companies Seither & Cherry Quad Cities Inc. and AP Electric Inc. were called into question due to inaccurate details for cases cited against defendants Oakland Automation LLC and Interclean Equipment LLC."  

Judges' AI Orders Keep Trickling in As Fake Citations Persist (Law360) "A handful of federal judges have issued orders or guidelines this year on the use of generative artificial intelligence in court filings as attorneys continue to get in trouble for submitting legal documents with fake case citations, according to a Law360 Pulse analysis. Two federal judges — Colorado U.S. District Judge John L. Kane and Pennsylvania U.S. Magistrate Judge José R. Arteaga — included generative AI certification requirements in their court policies and procedures, per Law360 Pulse's tracker."  

AI Legal Misinformation Is Hurting Injury and Bankruptcy Clients (Bloomberg) "Generative artificial intelligence has crossed the threshold from novelty to everyday tool. From the palms of their hands, personal injury claimants are asking chatbots how much their cases are worth, while stressed debtors seek advice on which type of bankruptcy to file. The answers they receive feel well researched and authoritative. But when AI-generated misinformation affects life-altering decisions, the damage can be swift and sometimes irreversible. Legal professionals bear a responsibility to protect clients increasingly influenced by fast, cheap, and often dangerous guidance."  

'This Verdict Is a Wake-Up Call:' Jury Trial Finds Meta Breached State Privacy Law in Class Action Against Fertility App (The Recorder) "A San Francisco federal court jury on Friday found Meta Platforms Inc. violated the California Invasion of Privacy Act in a landmark data privacy class action, which accused the Big Tech giant of illegally mining sensitive sexual and reproductive health data from Flo Health Inc., an app-based online fertility tracking platform."  


Security

Government Layoffs Are Making Us Less Safe in Cyberspace, Experts Fear (Nextgov/FCW) "When the Trump administration took office in January, it inherited a precarious cyber threat environment in which years of investments in defense had failed to curb the threat from Russia, China and other U.S. adversaries. Six months later, challenges faced by federal agencies are far worse — the result of a wave of layoffs and voluntary separations instigated by the Department of Government Efficiency, or DOGE.... The exits mark the first time in the digital era that the government’s cyber defense has grown worse rather than better, they say, endangering not just federal agencies but a trove of critical industry sectors that rely on cyber assistance from the U.S. government."  

States Have More Data About You Than the Feds Do. Trump Wants to See It. (New York Times  - may not be accessible to all readers; please ask your librarian for a copy) "As the Trump administration has sought to amass personally sensitive data on millions of individuals in America, it has run into one roadblock. The states, and not the federal government, hold many of the details Washington officials would now like to see....The Trump administration is now expanding its data push to this trove, reaching into domains long controlled by the states — and further into their residents’ lives."  

A Major AI Training Data Set Contains Millions of Examples of Personal Data (MIT Technology Review - may not be accessible to all readers; please ask your librarian for a copy) "Millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in one of the biggest open-source AI training sets, new research has found. Thousands of images—including identifiable faces—were found in a small subset of DataComp CommonPool, a major AI training set for image generation scraped from the web. Because the researchers audited just 0.1% of CommonPool’s data, they estimate that the real number of images containing personally identifiable information, including faces and identity documents, is in the hundreds of millions. The study that details the breach was published on arXiv earlier this month."  


Privacy

Lawsuits Claim California Health Insurers Shared Private Data with Meta, Google and Others (The Recorder) "The insurance industry is facing a plethora of lawsuits accusing companies of allowing third-party trackers to collect policyholder data without consent. Many of the privacy class actions target health insurers, claiming these companies allowed private medical information to be shared with advertisers. Both Cigna and Blue Shield of California were hit with privacy violations for allegedly allowing companies like Meta to view personal medical data collected while using the insurers' websites."  

How Courts Navigate Data Privacy, Protection Challenges in Bankruptcy Cases (Legal Tech News) "Large amounts of data including personally identifiable information can be compromised in the event that debtors file for bankruptcy, and sell the information to pay back creditors. During the New York City Bar’s 'Bankruptcy and the Privacy Line: When Personal Information Becomes an Asset' panel, legal professionals discussed the evolving privacy landscape and the handling of personally identifiable information (PII) in bankruptcy cases."  

Sam Altman Warns There’s No Legal Confidentiality When Using ChatGPT As a Therapist (TechCrunch) "ChatGPT users may want to think twice before turning to their AI app for therapy or other kinds of emotional support. According to OpenAI CEO Sam Altman, the AI industry hasn’t yet figured out how to protect user privacy when it comes to these more sensitive conversations, because there’s no doctor-patient confidentiality when your doc is an AI."  


Tech Tips

I Don't Send Emails Late Anymore, Thanks to This Outlook Web Feature (How-To Geek) "The emails I now send arrive in my recipients’ inboxes at 9 AM sharp. Not because I’ve started waking up early, but because I’ve learned to use the Schedule Send feature in Outlook Web."  

The Biggest Signs That AI Wrote a Paper, According to a Professor (Gizmodo) "Mark Massaro has taught English Composition at Florida Southwestern State College for years, but his job became significantly more difficult in 2023. Not long after AI apps like ChatGPT became freely available, higher education throughout the U.S. was hit with a tsunami of automated cheating.... Instead, Massaro says he must depend on his own wits to assess whether a paper was illegitimately conjured or not. To do this, he’s put together a checklist of tell-tale signs that a paper is AI.... He shared some of those red flags with Gizmodo...."  


Emerging Tech

Brace for Transcript Errors? Court Reporting in the Digital Age (Law.com) "In a case in Nevada, a court reporter misheard a routine legal phrase, and transcribed it as ‘a motion from hell.’ In so doing, the ordinary term—'countermotion to compel'—became a dramatic error. The Nevada incident, shared on LinkedIn by Las Vegas litigator Donna Wittig, was the court reporter's error and not a digital or artificial-intelligence, or AI, mistake. But observers say it underscores concerns about the future accuracy of transcripts, as courts increasingly begin to depend on digital recording and emerging technology like AI programs."  


Industry News

How China Is Girding for an AI Battle With the U.S. (Wall Street Journal - may not be accessible to all readers; please ask your librarian for a copy) "China is ramping up efforts to build a domestic artificial-intelligence ecosystem that can function without Western technology, as it steels itself for a protracted tech contest with the U.S. ... Many of the initiatives were on display at an AI conference that ended this week in Shanghai, which Chinese authorities used as a showcase for products free of U.S. technologies."  

Chasing Credibility: Is Harvey's Sales Pitch Working? (Law.com) "In 2022, Harvey burst onto the legal tech scene, quickly becoming the poster child for how generative artificial intelligence could reshape the legal profession.... Though the fanfare has quietened, Harvey's ambition has not. Here's how the AI pioneer has established itself as an industry leader through careful marketing to elite clients, building market credibility, shrewd pricing and successive funding rounds."  


Please contact your local librarian if you have trouble accessing any of these stories.

Have a story to recommend? If so, click here.  
