
Cybercriminals are targeting lawyers with sophisticated tools and smarter tactics, from deepfake impersonations of firm leaders to AI-driven phishing and law firm data breaches.
In this issue, we unpack the latest risks facing legal organizations in 2025 and explore what your firm can do to respond with confidence.
AI in the Courtroom: Promise, Pitfalls, and the Urgent Need for Legal Literacy
Artificial intelligence is rapidly reshaping how law firms operate and serve clients. It helps streamline research, accelerate document review, and even enhance visual evidence.
But while the promise of AI is clear, its pitfalls are becoming impossible to ignore.
As legal professionals embrace generative AI (GenAI) tools, a new crisis is emerging: a credibility gap that could shake the foundations of legal practice.
In a now-infamous Texas case, a lawyer submitted a GenAI-drafted legal brief that cited non-existent cases and fabricated quotes. The judge, noting the total absence of fact-checking, imposed a $2,000 penalty and mandated GenAI training.
Unfortunately, this isn’t an isolated incident.
Across the country, attorneys are unknowingly submitting hallucinated citations, made-up legal authorities, and unverified facts, all under the guise of AI-enhanced productivity.
A Crisis of Credibility in Legal Practice
According to analysts, AI hallucinations and deepfake evidence could undermine trust in court proceedings if not properly scrutinized, and they can also expose lawyers to costly cyberattacks.
Judges, lawyers, and even jurors are struggling to keep pace with a rapidly advancing technology that can simulate reality but doesn’t understand it.
Cases have already surfaced where courts were asked to consider AI-altered video evidence, only for judges to reject it due to its “novel” and potentially misleading nature.
The law’s current framework, such as Federal Rule of Evidence 901, offers a foundation for authenticating digital evidence, but experts say it’s far from sufficient in today’s AI era.
Moreover, the risks extend beyond attorneys.
Junior staff, paralegals, and clerks may not fully grasp the implications of inputting sensitive case data into commercial platforms like ChatGPT or Gemini, inadvertently triggering privacy violations or ethical breaches.
The Solution: AI Literacy, Ethics, and Training
AI can improve legal workflows, but responsible use begins with education. Legal experts are calling for:
- Mandatory AI ethics training for attorneys, judges, and legal staff
- Clear verification protocols for AI-generated documents and citations (a minimal checking sketch follows this list)
- Continuing legal education (CLE) on emerging AI technologies
- Secure, privacy-compliant AI tools vetted for legal use
- In-house guidance on responsible data entry into third-party platforms
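On the verification point, one lightweight control is to refuse to file any draft whose citations have not been confirmed by a human against an authoritative source. The Python sketch below illustrates the idea under stated assumptions: the citation pattern is deliberately simplified, and the `confirmed` set stands in for whatever verification workflow your firm uses (Westlaw, Lexis, or manual lookup); none of it reflects a specific product's API.

```python
import re

# Rough pattern for reporter-style citations such as "598 F.3d 1336" or
# "410 U.S. 113". Real citation formats are far more varied; this is a
# simplified illustration, not a production parser.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\.\s?Ct\.|F\. Supp\. (?:2d|3d)|F\.(?:2d|3d|4th)?)\s+\d{1,4}\b"
)

def flag_unverified_citations(draft_text: str, verified: set[str]) -> list[str]:
    """Return citations in the draft that no reviewer has confirmed yet."""
    found = set(CITATION_RE.findall(draft_text))
    return sorted(found - verified)

draft = "As held in 598 F.3d 1336 and 123 F.4th 456, the claim fails."
confirmed = {"598 F.3d 1336"}  # citations a human has checked in Westlaw/Lexis

for citation in flag_unverified_citations(draft, confirmed):
    print(f"UNVERIFIED: {citation} -- confirm before filing")
```

The value is in the workflow, not the regex: anything the checker flags stays out of the filing until a person has read the actual case.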
Why It Matters Now
Legal practitioners who fail to grasp these risks are prone to procedural errors that erode public trust in legal institutions. Without a strategic and ethical approach, AI could transform the courtroom from a place of truth into a hall of confusion.
Law Firm Data Breach Exposes Sensitive Health and Identity Records
Zumpano Patricios, P.A., a prominent Florida-based law firm, confirmed a significant data breach that may have compromised highly sensitive personal and health-related information of individuals across the country.
The breach is now under investigation by the data breach law firm Strauss Borrelli PLLC.
The incident highlights a growing concern in the legal sector: law firms, especially those handling protected health information (PHI), are increasingly becoming prime targets for cybercriminals.
On May 6, 2025, Zumpano Patricios detected unauthorized activity within its IT systems.
A forensic investigation later confirmed that an external attacker had gained access to and potentially exfiltrated a range of personally identifiable information (PII) and protected health information (PHI) stored within the firm’s network.
The compromised data may include:
- Full names
- Social Security numbers
- Medical provider names and health insurer details
- Member ID numbers
- Dates of service and billing details
- Clinical coding data
- Portions of medical records
While the full scope of the breach remains under review, Zumpano Patricios acknowledged that the attacker may have removed information belonging to an unknown number of individuals.
On July 3, 2025, the firm publicly disclosed the breach on its website and began notifying affected individuals.
Founded in 2003, Zumpano Patricios is known for its work in antitrust, corporate litigation, and antiterrorism, and especially for representing healthcare providers in insurance-related disputes, a role that inherently involves handling a large volume of PHI.
With offices in Florida, New York, Illinois, Utah, and Nevada, the firm operates nationally, meaning the impact of this breach may extend far beyond state lines.
What Law Firms Should Be Asking Now
- Are our systems regularly monitored for unusual activity?
- Do we encrypt all sensitive PII/PHI, both at rest and in transit? (A minimal sketch follows this list.)
- How quickly could we detect, contain, and report a breach?
- Are we offering staff ongoing cybersecurity awareness training?
- Have we secured professional liability insurance that covers cyber incidents?
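On the encryption question, "at rest" means files are unreadable on disk without a key, not merely sitting behind a login. The sketch below uses the Fernet interface from the widely used Python cryptography library to show the concept; the file name and key handling are illustrative assumptions. In practice, firms should lean on full-disk encryption and managed key vaults rather than hand-rolled scripts.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate the key once and store it in a secrets manager or hardware
# module, never next to the data it protects. (Key handling here is
# deliberately simplified for illustration.)
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a sensitive client file before writing it to shared storage.
with open("client_medical_record.pdf", "rb") as f:  # hypothetical file
    ciphertext = fernet.encrypt(f.read())

with open("client_medical_record.pdf.enc", "wb") as f:
    f.write(ciphertext)

# Only a process holding the key can recover the plaintext.
plaintext = fernet.decrypt(ciphertext)
```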
Facial Fakes and Fraud Calls: Why Law Firm Leaders Are the Newest Deepfake Targets
In 2025, deepfake technology is a growing cybersecurity threat aimed directly at your law firm’s most trusted faces: executives, managing partners, and senior attorneys.
From fake video calls to cloned voices used in financial fraud, deepfakes have become more precise, faster to generate, and disturbingly believable. And the attackers know exactly who to target.
Why Managing Partners Are the New Bullseye
Cybercriminals no longer need to access your systems to cause damage. Instead, they just need a few minutes of your managing partner’s voice from a podcast, or a video clip from a conference.
Using this content, AI tools can generate highly convincing deepfake voices or video impersonations that can be used to:
- Authorize fake wire transfers
- Instruct staff to share confidential files
- Manipulate clients or co-counsel
- Spread false public statements or filings
What makes this tactic even more dangerous? It bypasses your traditional firewalls. This is social engineering 2.0, and it looks and sounds exactly like the people you trust.
Real Cases, Real Law Firm Risks
In recent months, companies in the U.S., Hong Kong, and U.K. have reported attempted fraud involving deepfaked video calls of firm leadership instructing finance staff to move client funds.
For example, in Hong Kong, a finance employee was deceived into transferring over $25 million after joining a deepfake video conference that appeared to show the firm's CFO and colleagues.
Imagine the reputational and legal damage if confidential case files or trust account funds were compromised because someone mistook a fake video for a real instruction.
Why Deepfakes Are So Hard to Detect Now
Deepfake detection used to rely on telltale signs: stiff eye movement, awkward lip sync, or strange lighting. But in 2025, the tools used to generate fakes have outpaced many of the tools used to catch them.
Attackers are also combining deepfakes with AI-written scripts and real-time phishing tactics, making them harder to dismiss as obvious scams.
What Law Firms Can Do Today
- Train for Executive Deepfake Scenarios: Go beyond traditional phishing drills. Create tabletop exercises where staff must respond to deepfake video or voice attempts, and teach teams to verify instructions via secondary channels before taking any action.
- Lock Down Public Media: Limit the amount of video or audio of partners available online, and remove old interviews, webinars, or promotional content that is no longer necessary.
- Implement Voice and Video Verification Protocols: Use code words, multi-party approvals, or private Slack confirmations before acting on any voice- or video-based request, even if it seems urgent (see the sketch after this list).
- Use Deepfake Detection Tech, Carefully: Tools like Intel's FakeCatcher or Microsoft's Video Authenticator can help, but don't rely on them alone. Detection is improving, but human skepticism is still your strongest filter.
- Communicate the Risk to Clients: If your firm handles sensitive or high-value client matters, consider including deepfake impersonation as part of your security disclosure and risk discussions. Some clients are already being targeted directly.
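To make the verification-protocol point concrete, here is a minimal Python sketch of an "N-of-M" approval gate: no wire transfer or file release proceeds on the strength of a single voice or video instruction, no matter how convincing, until independent confirmations arrive over channels the impersonator does not control. The threshold and request fields are assumptions for illustration, not a prescribed workflow.

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskRequest:
    """A request received by voice or video that must never be trusted alone."""
    description: str
    required_approvals: int = 2          # independent confirmations needed
    confirmations: set[str] = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        """Record a confirmation received over a separate, trusted channel."""
        self.confirmations.add(channel)

    def may_proceed(self) -> bool:
        return len(self.confirmations) >= self.required_approvals

request = HighRiskRequest(
    "Wire $250,000 to a new vendor account ('managing partner' on video)"
)
request.confirm("callback to the partner's known cell number")
print(request.may_proceed())  # False: one confirmation is not enough
request.confirm("in-person sign-off from the CFO")
print(request.may_proceed())  # True: two independent channels agree
```

The design choice that matters is that the confirming channels are ones the requester did not initiate, so a cloned voice or faked video call cannot satisfy the gate by itself.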
What’s Next? Deepfake-for-Hire
With deepfakes-as-a-service tools available on the dark web, even low-level scammers can now create convincing impersonations of your senior attorneys. This means every law firm — large or small — is now at risk.
The best defense starts with awareness, policy, and preparing your team to question the face they trust the most.
Reputation, client trust, and financial security are all at stake. As malicious actors hone their tools and tactics, law firms must strengthen their defenses.
Check out our website for more information on how our cybersecurity experts can help protect your law firm from the evolving threats.
Share this newsletter with your managing partners, IT team, and compliance lead. In today's threat landscape, cybersecurity is a firm-wide responsibility.
Best regards,