[By Kyana Givens]
Yesterday, we posted the first half of this piece. Here is the second half:
The Criminal Defense Lens: The Government.
Here is the lever I think is underused and that the order in Morgan v. V2X, Inc., No. 1:25-CV-1991-SKC-MDB (D. Colo. Mar. 30, 2026) (Dominguez Braswell, M.J.), helps sharpen considerably. The attention in all three cases has focused on the defense side: What happens when a litigant uses AI. But Morgan establishes something bilateral that the privilege conversation has largely missed. Magistrate Judge Dominguez Braswell held that the identity of an AI tool is not protected work product, and that the opposing party has “a legitimate and reasonable need to know which platform was used to assess whether confidential information had been compromised.” That reasoning does not belong exclusively to persons accused of crime. It cuts both ways.
The government is using AI, at a scale largely under-disclosed. Federal agencies, U.S. Attorneys’ Offices, and law enforcement are deploying AI tools in investigations, in evidence analysis, in discovery processing (e.g., Everlaw), and increasingly in the agentic workflows that chain together multiple automated steps before a human reviews the output. Most of that use is invisible in the standard discovery record. It does not appear in a disclosure request because nobody thought to ask for it. Morgan gives us a frame for asking. If the government processed evidence through an AI platform in connection with this prosecution, we have the same legitimate need the Morgan court recognized: Which platform, what are its data practices, and was confidential information handled in a way that meets the standard that court just articulated? The court’s reasoning was grounded in protecting against the risk that confidential information was compromised through inadequate platform practices. That risk runs in both directions. Defense-side confidential information obtained through discovery, intercepted communications, materials produced under protective order: all of it can be fed into government AI systems. Under Morgan’s logic, the defense has standing to ask how.
Beyond Morgan, the Daubert angle is real, but it is not the only angle. Daubert gets you to the reliability of a specific AI output being offered as evidence. What Morgan potentially opens is something broader: A disclosure obligation about AI tool use in the investigation and case-building process itself, not just at the moment of trial. If the government used an agentic AI workflow (e.g., Chainalysis Reactor in crypto-tracing cases) to analyze financial transactions, process communications, or identify leads, the methodology of that analysis may be material to the defense under Brady regardless of whether the output is formally offered as evidence. The decision to charge, the selection of evidence, the framing of the theory of the case: If AI was in that pipeline, we may have a right to know. This is territory courts have not addressed yet in the criminal context. But the foundation is being built, brick by brick, in civil cases. Morgan is a brick. The question for us is whether we are prepared to pick it up and use it.
Practically, that means adding AI disclosure requests to our standard discovery practice now. Not just requests tied to a specific piece of AI-generated evidence, but broader inquiries: Did the government use AI tools in the investigation? In evidence analysis? In discovery processing? Which platforms? What were the data practices governing those platforms? Were any confidential materials processed through systems without adequate retention and disclosure protections? These are legitimate questions. Morgan gives us a doctrinal hook for asking them. The government’s answers, or its silence, are themselves information.

What Morgan Adds Beyond the Privilege Question.
Morgan is a case I will be watching closely. It is the only one of the three that addresses AI governance prospectively rather than after the fact, and it is the first to establish a forward-looking protective order framework with real teeth. If that framework gains traction, the question of which AI tools are permissible for use on confidential materials is no longer just an internal policy question. It is a question a court may resolve in the middle of a case, with a compliance deadline attached.
The Wilkins order flags the access-to-justice dimension here with appropriate directness: The Morgan protective order “will (at least for now) bar the parties from using most, if not all, mainstream low-to-no-cost AI to process confidential information,” because enterprise-tier tools with adequate contractual protections are available “only through organizational procurement processes, or at costs that a pro se litigant is unlikely to bear.” That is our clients’ side of the table described exactly. And it is, quietly, a description of the resource asymmetry that has always defined this work.
What Can We Do with This Right Now?
There is no Eleventh Circuit guidance. No Georgia-specific ethics opinion on criminal defense AI use. Three district court decisions from different circuits, two involving pro se civil plaintiffs and one involving a represented person in the criminal legal system, who lost badly. The framework is still forming. But waiting for consensus is itself a choice with consequences.
The client conversation is first, and it now sits alongside the do-not-talk instruction as a day-one obligation. The tool inventory in my own practice comes next: What is in use, on which matters, and do the agreements governing those tools meet the standard the Morgan court has now articulated? The discovery practice expands to include AI disclosure requests directed at the government, starting now, before a court tells me I should have asked sooner. And the documentation practice begins today, not when a judge asks for it.
Go read Stephanie Wilkins’ full analysis for the doctrinal breakdown. Come back here for what it means when the client in the chair has a federal criminal charge, a consumer ChatGPT account they have been using since the day the FBI knocked on their door, and a government that used AI to build the case against them in ways that never appeared in a disclosure request.
Additional Resources.
ABA Formal Opinion 512, Generative Artificial Intelligence Tools (July 29, 2024).
Georgia Rules of Professional Conduct, Rules 1.6 and 5.3.
