In initial rulings, US federal courts split on the intersection of litigant AI use and privilege protections

Tuesday 14 April 2026

Robert A Schwinger
Norton Rose Fulbright, New York
robert.schwinger@nortonrosefulbright.com

In an initial foray into what promises to be a busy and contentious area, two United States federal courts in different cases on the same day in February 2026 issued seemingly contradictory rulings on the novel issue of whether materials that litigants had generated themselves using publicly available AI tools were protected from discovery.  

Despite the seeming contradictions, certain factual differences between the cases may account for the varying outcomes.  But both the broader issues and these particular nuances seem likely to be explored further in any event, as additional cases arise presenting similar legal questions in a variety of factual settings. 

In one case, United States v Heppner, a criminal prosecution, a New York federal court (first in a February 10, 2026 oral ruling, followed by a February 17, 2026 written decision) held that documents a criminal defendant generated using a publicly available AI tool were not privileged and did not qualify as protected work product.  2026 WL 436479 (S.D.N.Y. Feb. 17, 2026).  In the other case, a Michigan federal court reached a different conclusion (also on February 10, 2026), holding that the ChatGPT queries made by a civil litigant unrepresented by counsel, and the AI system’s responses to her, did indeed qualify for protection under the work-product doctrine.  Warner v Gilbarco, Inc., 2026 WL 373043 (E.D. Mich. Feb. 10, 2026).  

United States v Heppner

In Heppner, when the defendant was arrested and his electronic devices were seized, they revealed that, while the defendant was under government investigation before his arrest, he had run queries through the publicly available version of the “Claude” AI system.  The devices contained both Heppner’s queries and Claude’s responses.  Although Heppner later shared these materials with his defense counsel, it was acknowledged that he used Claude on his own initiative, without any instruction from his counsel to do so.  

Heppner attempted to bar the government from reviewing these materials on the grounds of attorney-client privilege and work-product protection, but the New York federal court denied protection on either ground.  
Attorney-client privilege protects communications (1) between a client and his or her attorney, (2) that are intended to be, and in fact were, kept confidential, (3) for the purpose of obtaining or providing legal advice.  But the New York court held that the materials at issue here failed two if not all three of these required elements.  

  • First, the court held that these materials were not communications between client and attorney, because Claude “is not an attorney,” and there can be no “trusting human relationship” between a user and an AI tool.  
  • Second, the court found the communications with Claude were not confidential, citing rulings from other contexts holding that AI users do not have “substantial privacy interests” in their communications with “publicly accessible” AI platforms.  Moreover, the court noted, Claude’s privacy policy explicitly stated that Claude’s developer reserved the right to disclose user data to third parties, including governmental authorities.  
  • Lastly, the court held that Heppner did not satisfy the requirement that the communication be made for the purpose of obtaining or providing legal advice, since the defendant conceded that he did not use Claude at the direction of counsel, and Claude itself disclaims any ability to provide “formal legal advice or recommendations” and in fact recommends that users seeking legal advice consult a qualified attorney.  

The court also held that the materials were not protected by the work-product doctrine either.  That doctrine shields from disclosure materials prepared by or at the behest of counsel in anticipation of litigation or for trial, in order to protect attorneys’ mental processes and provide a safe area in which counsel can analyze and prepare the client’s case.  However, the doctrine generally does not extend protection to materials that were not prepared either by the attorney or the attorney’s agents.  
Since Heppner conceded that he was not acting at the direction of his lawyers when he communicated with Claude, and the materials in question did not in fact reflect the lawyers’ strategy at the time when Heppner created them, the court concluded that the materials were not entitled to work product protection.  

Warner v Gilbarco, Inc.

In the Michigan federal case Warner v Gilbarco, Inc., however, the court held that a pro se civil litigant’s ChatGPT queries and the responses received back were in fact protected by the work product doctrine.  

The plaintiff in Warner filed an employment lawsuit against her former employer on her own, without counsel.  After she admitted to using ChatGPT to answer legal questions and draft her filings, the defendants moved to compel production of her ChatGPT queries and responses, to which she objected on work-product grounds.  The court agreed that the work-product doctrine applied, concluding that the materials reflected the litigation thought processes of a litigant who was representing herself.  

The Michigan court notably held that work-product protection was not waived by the disclosure of this information to ChatGPT.  In the work-product context (as distinguished from the stricter rule applied in the attorney-client privilege context), protection is waived only when disclosure is made to an adversary, or in some way that is likely to place the materials in an adversary’s hands.  The court held that disclosure of information to ChatGPT was not disclosure to a litigation adversary but simply to AI software, observing that such programs “are tools, not persons, even if they may have administrators somewhere in the background.”

Conclusion

As the courts’ analyses in the two rulings showed, both decisions were driven by their particular facts.  Neither purported to lay down broad rules applicable to all possible scenarios of litigant AI use.  

Thus, for example, neither decision addressed whether the outcome might be different if a represented party were to use AI at counsel’s direction.  Nor did either ruling address what the outcome might be if, instead of using a public AI tool, a party used an enterprise or closed AI tool with confidentiality protections that did not permit disclosure to third parties or the use of the information to train public AI models.  

Neither of these rulings is thus likely to be the last word in this area.  Both rulings may yet face appellate review.  Future cases may present a wave of factual permutations that do not fit neatly into the rationales of either ruling.  In addition, attitudes may continue to develop and shift about the most appropriate and realistic way to conceptualize what humans are doing when they input thoughts and information into electronic tools in hopes of receiving analytical responses.  

Just how soon the US legal system will come to consensus on such matters remains to be seen.  As with most issues relating to AI at present, we are all in early days, and reliable guidance may be hard to come by for some time.