The Counsel Nexus: When AI Conversations Are Privileged — and When They Are Not
Within the span of a single week in February 2026, two US federal courts reached opposite conclusions on whether generative AI conversations are legally protected. The divergence reveals a clear principle: privilege over AI communications depends not on the platform used, but on the role a qualified attorney plays in directing its use.
The Four Cases at a Glance
For attorney-client privilege to apply, three elements must coincide: (1) a communication between client and attorney, (2) kept confidential, and (3) made for the purpose of obtaining legal advice.
In United States v Heppner (SDNY, Feb 17, 2026), Judge Rakoff ruled that a criminal defendant’s Claude conversations—defence strategy documents—were discoverable. Heppner created them without attorney direction, shared them with a platform whose privacy policy permits governmental disclosure, and later forwarded them to counsel. None of that saved them.
One week earlier, in Warner v Gilbarco (ED Mich, Feb 10, 2026), Magistrate Judge Patti reached the opposite result. A pro se plaintiff’s ChatGPT sessions were protected as litigation work product. The court held that using an AI platform did not constitute disclosure to an adversary, the materials were disproportionate to discovery needs, and accepting the defendants’ theory would nullify work-product protection in modern drafting environments.
Two earlier decisions established the framework for this ruling. In Da Silva Moore v Publicis Groupe, Judge Peck had approved technology-assisted review (TAR)—attorney-directed predictive coding—for eDiscovery, embedding the principle that AI used under counsel’s supervision attracts the same privilege architecture as counsel-directed professional processes. In Brown v BCA Trading Ltd, the English High Court had extended the same logic to contested TAR applications in civil disclosure, grounding its approval in proportionality and attorney oversight—the same two pillars that animate both 2026 decisions.
The Decisive Variable: Attorney Direction
All four courts applied the same rule: privilege protection is a function of the attorney's role, not the technology's sophistication. When an attorney directs AI use—as in Da Silva Moore's seed-set review and Brown's TAR process—the AI becomes an extension of professional function, and its outputs fall within privilege. When a client deploys AI autonomously, as Heppner did, the outputs are simply discoverable documents.
Heppner failed on three grounds. First, Claude is not an attorney; attorney-client privilege requires a licensed professional owing fiduciary duties. Second, Anthropic’s privacy policy undermined confidentiality: its reservation of governmental disclosure rights negated reasonable expectation of confidentiality. Third, Heppner did not use Claude to obtain legal advice; Claude itself disclaimed providing it. Work-product protection failed because documents were generated independently, not as part of attorney-directed litigation preparation.
Warner succeeded where Heppner failed, but on different grounds. The court held that work-product waiver requires disclosure to an adversary; an AI platform is a tool, not a third party, so submitting material to it triggered no waiver. Attorney-client privilege, which requires a lawyer-client relationship and confidential communications made for legal advice, was not at issue.
The Counsel Nexus: A Practical Test
A doctrinal distinction is essential. Attorney-client privilege requires three elements to be simultaneously present: a communication between a client and their attorney, kept confidential, for the purpose of obtaining or providing legal advice. The work product doctrine operates on an entirely different standard: it requires only that materials be prepared in anticipation of litigation, and does not independently require the presence of an attorney. This is precisely why the pro se plaintiff in Warner—who had no attorney—succeeded on work product grounds where the represented defendant in Heppner failed on both. The Counsel Nexus test therefore operates on two tracks: for attorney-client privilege, attorney direction is a threshold requirement; for work product, it is a strengthening factor but not independently sufficient.
Communications involving AI may attract attorney-client privilege, and AI materials may attract work-product protection, where three conditions are met: a qualified attorney directed or authorized the AI’s deployment; the AI’s outputs were integrated into the attorney’s professional work rather than existing as standalone client documents; and the platform’s confidentiality architecture—actual terms, not marketing claims—does not sever the reasonable expectation of privacy that privilege requires.
Heppner met none of these. Warner satisfied the functional conditions for work-product protection even without an attorney, because the pro se litigant prepared materials in anticipation of litigation—the work product doctrine’s own threshold, which does not require attorney direction. Da Silva Moore and Brown v BCA met all three by design: TAR seed sets are attorney-reviewed, attorney-corrected, and processed through enterprise-grade legal platforms with confidentiality commitments.
What This Means for Business
The implications are immediate. First, enterprise AI use in legal matters must be attorney-initiated. Autonomous client use of public AI platforms creates discoverable documents. Legal departments should require that substantive AI use in litigation, regulatory investigations, or significant transactions be explicitly directed in writing by qualified counsel.
Second, platform terms matter legally. Heppner's confidentiality failure was grounded in Anthropic's consumer privacy policy. Organizations negotiating enterprise agreements with AI vendors, and obtaining tailored commitments restricting disclosure, stand on structurally stronger privilege footing.
Third, the Da Silva Moore and Brown v BCA template—attorney direction, iterative quality control, documented oversight—is the safest approach. Firms should document that a lawyer directed the AI process, reviewed outputs, and integrated them into professional work. That documentation is the evidentiary basis for future privilege claims.
Fourth, for AI vendors serving the legal sector: privilege architecture must be built into the product. A product structurally requiring attorney initiation and sign-off—as TAR platforms required attorney-reviewed seed sets—produces outputs attracting privilege protection by design. A product enabling client-autonomous legal analysis without attorney integration produces discoverable documents.
Four decisions spanning 2012–2026, two continents, and two legal systems converge on one truth. The technology is neutral. Whether AI attracts attorney-client privilege protection depends entirely on the human legal framework within which it is deployed—specifically, the role of a qualified attorney in directing, supervising, and taking professional responsibility for outputs.
The question courts will always ask is: whose professional judgment directed it? Businesses designing AI workflows around a qualified lawyer’s genuine direction will find the law accommodates them. Those treating AI as a client-facing substitute for legal counsel will find, as Heppner did, that outputs sit on the wrong side of the privilege line.
Prasanth Raju is a Counsel and Advocate at the Bombay High Court and the Supreme Court of India.