[By Kyana Givens]
This two-part post (today and tomorrow) updates my February 2026 post on US v. Heppner and Warner v. Gilbarco, the first two federal decisions to address AI use and privilege. Since that post, a third significant federal case has arrived, and it opens new territory at the intersection of agentic AI and criminal defense practice that goes beyond the privilege question. Here is what changed, and what I think it means for us.
Three Federal Cases.
Stephanie Wilkins at Legaltech Hub published a careful synthesis of all three decisions on April 6, 2026 that is worth reading start to finish. She covers the cases methodically: what happened, what the courts held, where they agree, where they split. I am not going to repeat that work here. Go read it. What I want to do is take the framework she built and run it through the specific lens of federal criminal defense practice, because the stakes on our side of the table are different, and the lessons land differently, too.
Here is the short version of where the doctrine stands after three cases:
US v. Heppner, No. 1:25-CR-503-JSR-1, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026) (Rakoff, J.): A represented person accused of a criminal offense used the free consumer version of Claude, without direction from counsel, to draft defense strategy documents using information he received from his attorneys. No privilege, no work product. The platform’s privacy policy destroyed any confidentiality expectation and, without attorney direction, there was nothing to protect.
Warner v. Gilbarco, Inc., No. 2:24-cv-12333-GAD-APP, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026) (Patti, M.J.): A pro se civil plaintiff used ChatGPT in her case preparation. Work product protection upheld. AI is a tool, not a person, and handing information to a tool is not adversarial disclosure.
Morgan v. V2X, Inc., No. 1:25-CV-1991-SKC-MDB (D. Colo. Mar. 30, 2026) (Dominguez Braswell, M.J.): A pro se civil plaintiff used an unidentified AI platform to process confidential discovery materials. Work product protection upheld for AI outputs. But the platform name itself was not protected, and the court ordered disclosure within ten days.
Then, in a move no prior court made, Magistrate Judge Dominguez Braswell drafted her own AI-specific protective order provision from scratch: Confidential information may only be uploaded to an AI platform whose provider is contractually prohibited from using inputs for training and from disclosing them to third parties, with written documentation of those protections required.
Three courts, no consensus on the hard questions. But the pattern Wilkins identifies is real: traditional doctrine applies without modification, and the outcomes turn on platform, purpose, and supervision, not on the fact of AI use itself.

The Criminal Defense Lens: Our Clients.
Everything above matters. What I want to focus on first is what these three cases mean specifically for our clients, because two of the three decisions involved pro se litigants in civil cases, and the one involving a represented person accused of a crime is the one that went worst.
That is not a coincidence. It is the structure of the problem. Our clients come to us after something has already happened. They received a target letter, or they were arrested, or they are sitting in a detention center trying to understand what is coming. They are scared, they are resourceful, and they have smartphones. The same tools that are free and available to everyone else are free and available to them. And unlike the person in Heppner, they do not have a Quinn Emanuel retainer and a sophisticated understanding of what a grand jury subpoena means before they start typing.
The defendant in Heppner knew he was a target. He had counsel. He made a deliberate decision to use AI to think through his defense. He still lost everything. The court held that once he fed information received from his attorneys into a consumer AI platform, he waived privilege not just over those documents but potentially over the underlying attorney-client communications themselves. That waiver runs upstream through the entire representation. If a sophisticated, represented person with active counsel can stumble into that outcome, imagine what happens with a client who got a target letter three weeks ago, has not yet retained anyone, and has been working through the facts of their case with ChatGPT every night because it is free and available and answers their questions.
The Client Conversation Has to Happen First.
One of the first instructions I give every client, at the first meeting, before anything else, is some version of this: Do not talk about your case with anyone. Not family. Not friends. Not a cellmate. Not on the phone. The government is listening and the people you trust most can become witnesses against you. That instruction comes first because the damage from ignoring it cannot be undone.
The next instruction, right behind it, now must be some version of this: Do not use AI to think through your case. Not ChatGPT. Not Claude. Not whatever app your cousin showed you. Not even a well-meaning family member running prompts on your behalf, trying to help. Do not type the facts of your case, the names of witnesses, what your attorney told you, or your version of events into any platform I did not hand you with specific instructions. If you already have, tell me, before we go any further.
The logic is identical to the first instruction. Typing your defense strategy into a consumer chatbot is a disclosure. It goes somewhere. It may come back. The same instinct that tells a client not to narrate their case to a stranger should tell them not to narrate it to a platform that retains inputs, uses them for training, and reserves the right to disclose them to governmental regulatory authorities. The Heppner court did not treat Anthropic’s privacy policy as a technicality. It treated it as the controlling fact.
This conversation belongs in engagement letters and every intake meeting, stated plainly and specifically, not buried in boilerplate. What did you use? What did you type? Which platform? When? Did anyone else in your household use AI to investigate your case? The answers shape what I know about privilege going in, and they shape what I disclose going out.
[TO BE CONTINUED TOMORROW]
