High Tech Friday: The AI Privilege Question Just Got Answered. Sort Of.

Posted by W. Matthew Dodge

[By Kyana Givens]

AI tools are in our offices whether we have a policy for them or not. Someone on your team used one this week, maybe on an active matter, maybe with client details in the prompt. That is not an indictment of anyone. It is just the reality of where legal practice is right now, and two federal decisions issued on the very same day have given us enough to actually do something useful with that reality.

Let’s walk through what happened and what it means for how we practice.

Same Day, Same Question, Opposite Answers

United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 10, 2026)

Bradley Heppner, former chairman and CEO of GWG Holdings, is facing federal charges including securities fraud, wire fraud, conspiracy, making false statements to auditors, and falsification of records tied to an alleged scheme to defraud investors of approximately $150 million. After receiving a grand jury subpoena and retaining counsel at Quinn Emanuel, Heppner used the consumer, non-enterprise version of Anthropic’s Claude on his own initiative, without direction from his attorneys, to prepare 31 documents outlining defense strategy and potential legal arguments. He fed information he received from his lawyers into the tool, generated the documents, and then transmitted them to counsel. When the FBI executed a search warrant at his residence and seized his devices, the government moved to compel production of those documents.

On February 10, 2026, Judge Jed Rakoff ruled from the bench that the documents were protected by neither attorney-client privilege nor the work product doctrine. He followed that ruling with a written opinion on February 17. The reasoning hit on three points. First, Claude is not an attorney. No privilege can attach to communications with a platform that holds no law license, owes no duty of loyalty, and cannot form an attorney-client relationship. Second, Heppner had no reasonable expectation of confidentiality because Anthropic’s consumer privacy policy expressly permits the company to use inputs to train its models and disclose data to third parties, including governmental regulatory authorities. Users consent to that policy when they create an account. Third, the documents were not prepared at the direction of counsel, which is fatal to a work product claim. Defense counsel conceded at oral argument that Heppner created them of his own volition. Judge Rakoff noted that if counsel had directed Heppner to use the tool, the analysis might have been different.

There is one more wrinkle worth flagging. Because Heppner fed information he received from his attorneys into Claude, the government argued, and Judge Rakoff agreed, that this disclosure may have waived privilege over the underlying attorney-client communications themselves, not just the AI documents. That downstream waiver risk is significant and deserves its own conversation with our clients.

Warner v. Gilbarco, Inc., No. 2:24-cv-12333-GAD-APP, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026)

On the exact same day, Magistrate Judge Anthony P. Patti of the Eastern District of Michigan reached the opposite conclusion in a civil employment discrimination case. The plaintiff, Sohyon Warner, is a pro se litigant. That factual distinction matters enormously and explains much of the divergence between the two decisions.

In Warner, the defendants moved to compel discovery into the plaintiff’s use of ChatGPT in preparing her case materials. Magistrate Judge Patti denied the motion, holding that the AI-generated materials were protected under the work product doctrine and that using a consumer AI tool did not waive that protection. His reasoning rested on two key points. First, work product waiver requires disclosure to an adversary, not just disclosure to a third party. AI is a tool, not a person, so handing information to an AI platform is not the kind of adversarial disclosure that destroys work product protection. Second, because Warner was acting as her own counsel, her use of AI was her litigation preparation. There was no gap between the person using the tool and the person managing the case, which is precisely the gap that sank Heppner’s work product claim.

Reading the two cases together honestly: they are not as contradictory as the headlines suggest. The different outcomes reflect genuinely different facts. Heppner was a represented client who acted unilaterally, used a consumer tool with no confidentiality protections, and fed privileged attorney communications into it without direction from counsel. Warner was an unrepresented party whose AI use was functionally indistinguishable from her own litigation preparation, and the court drew a sensible line on work product waiver that Heppner’s facts never reached. Both cases confirm that courts are examining process, purpose, and supervision, not issuing categorical rules about AI. That is something we can work with.

The Stakes Are Higher on Our Side of the Table

Our clients are already in an asymmetric fight. The government has resources, infrastructure, powerful technology tools, and time. We have our judgment, our relationships with clients, and increasingly, these tools. The efficiency argument for AI is genuine in our context, not abstract: discovery productions that would have taken weeks to process, sentencing research that surfaces disparities we would not otherwise catch, research memos that free us up for the work only we can do. Our clients are better represented when we use these tools thoughtfully.

Which makes the confidentiality risk matter more, not less. When client communications, case theory, or interview details move through an unvetted AI platform, we are potentially routing our most sensitive information through a system that may retain it, use it for model training, or expose it in ways we never considered. Georgia Rule 1.6 does not have an AI carveout. And Heppner adds a specific warning for criminal defense: if our clients independently feed information into a consumer AI tool, including information they received from us, they may be waiving privilege not just over those documents but over the underlying communications with counsel. That is a conversation we need to be having with every client, proactively, before it becomes a discovery fight.

Where Georgia Stands Right Now

The Northern and Middle Districts of Georgia have not issued specific AI guidance as of this writing. The State Bar has not yet published a formal ethics opinion specific to generative AI. ABA Formal Opinion 512, issued in July 2024, provides the most useful national framework currently available and is worth reading. Georgia Rules 1.6 and 5.3, on confidentiality and supervision of nonlawyer assistants respectively, apply fully to AI-assisted work right now regardless of what guidance eventually follows. The honest answer is that Georgia-specific guidance is coming. We are in the window where building sensible practices is still ahead of the courts or the bar requiring them.

Practical Tips

Know what tools are already in use. Before writing any policy, just ask. What is your team using, on what kinds of matters, and with what client information in the mix? That includes informal tools, personal accounts, and consumer platforms. Get an honest picture.

Understand the consumer versus enterprise distinction. Heppner turned in part on the fact that Heppner used the consumer, non-enterprise version of Claude, which is governed by a privacy policy permitting data retention and third-party disclosure. Enterprise versions of major AI platforms typically include data processing agreements that prohibit retention and training on client inputs. That distinction is now legally significant. Know which version of any tool your office is using and get the right agreements in place.

Define what supervised use actually means in your practice and write it down. The lesson from reading Heppner and Warner together is straightforward: courts care about whether an attorney was actually driving the process. Heppner lost because he acted entirely on his own, without any attorney direction. Warner, a pro se plaintiff, won because her AI use was her litigation preparation. There was no daylight between the person using the tool and the person responsible for the case.

For those of us representing clients, the supervision question falls on us. Who directed the AI use? Was an attorney involved in defining the task? Was the output reviewed before it became part of the work product? Those are the questions a court will ask, and we want clear answers ready. Even a brief internal protocol, something that fits on one page, documenting which tools are approved, who can use them, and what review is required before output gets used in a matter, is enough to establish that the process was intentional. That is what protects us and our clients. And yes, solo practitioners should write down an internal protocol too.

Tell our clients explicitly. AI deployment within our practice requires transparency. Add something direct to your engagement letters and client onboarding. Clients should understand that anything they independently input into a consumer AI tool may be discoverable, is almost certainly not privileged, and may waive privilege over communications they have had with us. That warning needs to be specific, not buried.

Document the process briefly. A short notation of which tool was used, what the task was, and what attorney review occurred is enough to demonstrate intentional, supervised use. This is what makes the difference when a court asks.

Working With What We Know

No Eleventh Circuit guidance yet. No Georgia-specific ethics opinions on AI in criminal defense practice. These are two district court decisions, from different circuits, and their reconciliation will take time. What we do know is enough to act on. The doctrine has not changed. What Heppner and Warner together tell us is that courts are looking at process, supervision, and whether the AI use happened inside or outside a legitimate legal relationship. We have enough right now to build something defensible. The cases are decided. The framework is visible. Our clients are better served by thoughtful, structured AI use than by avoidance, and that is entirely within our reach.

Citations and Resources

United States v. Heppner, No. 25-cr-00503-JSR, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026) (written opinion) (Rakoff, J.)

Warner v. Gilbarco, Inc., No. 2:24-cv-12333-GAD-APP, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026) (Patti, M.J.)

ABA Formal Opinion 512, Generative Artificial Intelligence Tools (July 29, 2024), available at https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/aba-formal-opinion-512.pdf

Georgia Rules of Professional Conduct, Rules 1.6 and 5.3, available at https://www.gabar.org/barrules/georgia-rules-of-professional-conduct.cfm
