The Body in the Room: Reading Heppner Carefully
The Misread
There is a federal court decision from February of this year that every attorney using AI tools in their practice needs to read carefully. Not because of what it held, but because of what it did NOT hold, and because a significant amount of the published commentary about it is mischaracterizing the scope of the ruling in ways that could lead attorneys to draw precisely the wrong conclusions.
The case is United States v. Heppner, decided in the Southern District of New York on February 10, 2026. If you have read any of the dozens of law firm alerts, bar journal articles, or legal tech newsletters that have covered it since then, there is a meaningful chance you walked away with an incorrect understanding of what the court actually decided.
I want to be direct about this: I spotted this problem early, and I want to walk through it carefully, because the practical stakes for your practice are real. The misread pushes in the direction of excessive caution about attorney AI use that the Heppner court specifically did not address. And the issues the court DID definitively resolve, about client AI use and the conditions under which privilege is lost, are being underemphasized in ways that leave attorneys without the practical guidance they actually need.
So let’s look at what Heppner actually says, what it actually does NOT say, what the ethics guidance actually requires, and where the real traps are, including one that most of the Heppner commentary is not even discussing: the AI meeting notetaker problem.
The Facts
The defendant in Heppner, facing federal criminal charges, used the free consumer version of Anthropic's Claude (not an enterprise tool, not a paid subscription, but the free public-facing tier) to generate thirty-one documents analyzing his legal exposure. He then forwarded those documents to his attorneys.
The government moved to compel production of those documents. The defendant argued they were protected by attorney-client privilege and, alternatively, as work product. The court rejected both arguments and ordered production.
The Three Grounds
The court held that the AI-generated documents were not protected by attorney-client privilege on three independent grounds. The specificity of these grounds is exactly what most of the published commentary is glossing over.
Ground one: Claude is not a lawyer. The attorney-client privilege protects confidential communications between a client and their attorney made for the purpose of obtaining legal advice. Claude is not an attorney. Communications with Claude are not communications with counsel. This ground is uncontroversial and uninteresting for our purposes, though it does foreclose an argument some have made: that when a client uses AI as a legal assistant, the AI could somehow be analogized to the attorney for privilege purposes.
Ground two: No reasonable expectation of confidentiality. This is the most important ground for practitioners, and the one most analyses treat with insufficient precision. The court held that the defendant had no reasonable expectation of confidentiality in his communications with the free consumer version of Claude, because Anthropic's terms of service for the free consumer tier explicitly reserve the right to access and review conversations. The court relied on the specific terms governing that specific product tier. The free consumer Claude. Not the paid subscription. Not enterprise. The free tier.
The court also cited a district court observation from In re OpenAI that AI users do not have substantial privacy interests in their conversations with publicly accessible AI platforms: conversations that users voluntarily disclose to the platform and that the platform retains in the normal course of its business.
Ground three: Not made for the purpose of obtaining legal advice and not under attorney direction. Even if we could somehow get past grounds one and two, the documents would still not be privileged because they were not made at the direction of counsel and were not created for the purpose of facilitating legal advice. The defendant went to Claude on his own, without being asked or directed to do so by his attorneys. The court was explicit that this matters. It noted that the law firm never asked the client to do the AI research he did.
The work product claim failed for a related reason: work product requires that the material be prepared in anticipation of litigation under the direction of counsel. Again, the client acted independently, not at counsel’s direction.
What the Court Explicitly Left Open
Here is where we get to the misread.
The court’s ruling on grounds two and three is specifically tied to the facts of this case: a client using a free consumer AI product on their own initiative without attorney direction. The court did not hold, and conspicuously did not address, several questions that most published commentary is treating as though Heppner resolved them.
What Heppner did NOT decide:
- Whether an attorney’s own use of AI tools in preparing work product is protected. The client was not an attorney. The client was not performing legal work under counsel’s direction. The court was not asked to and did not address attorney use.
- Whether AI tools with enterprise-level contractual confidentiality protections would give rise to a reasonable expectation of confidentiality. The court’s ground two analysis turned on the specific terms of the free consumer tier. An enterprise agreement with a data processing addendum and explicit confidentiality provisions presents a materially different factual picture. The court left this open.
- Whether client use of AI at counsel’s direction under the Kovel doctrine could be protected. The Kovel doctrine, from United States v. Kovel, 296 F.2d 918 (2d Cir. 1961), extends attorney-client privilege to third-party agents who assist the attorney in providing legal services when those agents are acting under the attorney’s supervision and the communication is made for the purpose of obtaining legal advice. The Heppner court explicitly noted that the law firm never asked the client to do the AI research he did. That language is important. It signals that the outcome might have been different if the attorney had directed the client to use an AI tool as part of a specific legal task. The court left the Kovel question open.
The practical upshot: Heppner is a client-use case about a free consumer product. It is not a holding about attorney AI use. It is not a holding about paid enterprise tools. And it leaves open a pathway for Kovel-style analysis when clients use AI at counsel’s direction.
The attorneys publishing alerts saying ‘Heppner means you cannot use AI for privileged work’ are extrapolating from a consumer-client-use case to attorney use, which the court did not address. That is the misread.
That said, Heppner does give us clear, actionable guidance on one thing: you need to counsel your clients about their AI use. If your clients are running their legal problems through consumer AI tools on their own initiative and then forwarding you the output, Heppner tells you those documents are producible. That is a real and significant practice management issue regardless of what product tier they are using.
Attorney AI Use and Privilege: The Actual Analysis
The Question Heppner Didn’t Answer
So if Heppner didn’t answer the question of attorney AI use and privilege, where does that leave us? The answer is: in genuinely unsettled territory, with some useful analogies and guidance from the bar that helps us navigate.
The Email Analogy
In the late 1990s and early 2000s, there was a real and serious debate in the bar about whether attorney-client communications conducted by email were privileged. I recall this well, as it was right around when I became a licensed attorney. The concern was that email passed through third-party servers, that ISPs had technical access to the content, and that this third-party transmission might constitute a disclosure that waived privilege. State bars issued ethics opinions. The ABA weighed in. And eventually, the profession reached a working consensus: email communication between attorney and client is privileged, assuming basic security precautions, because the attorney and client have a reasonable expectation of confidentiality in email even though it passes through third-party infrastructure.
The AI privilege debate is tracking the same trajectory. The question is whether inputting client confidential information into an AI tool constitutes a disclosure to a third party that breaks privilege. And the answer, like the email answer, is going to be fact-specific and product-specific rather than categorical.
Free consumer AI tools with terms reserving the right to access conversations are almost certainly a problematic disclosure, consistent with Heppner’s logic as applied to attorney use.
Paid enterprise AI tools with a data processing agreement, explicit contractual confidentiality provisions, and an opt-out of model training are materially better, and likely sufficient under the developing professional consensus, though no court has definitively ruled on this in the attorney-use context.
Individual paid subscription tiers (like the Claude Pro subscription) with training turned off are better than the free tier, but not equivalent to an enterprise agreement with a data processing addendum. Turning training off addresses one concern: that your inputs could be used to train the model and thus become more broadly accessible. But it does not address whether Anthropic employees or systems can access conversations, what happens in response to legal process directed at Anthropic, or whether a court would treat the paid individual tier as creating a sufficient expectation of confidentiality for privilege purposes.
I actually had a conversation with Claude itself about the Pro subscription in the context of my practice, using Claude as a kind of “senior partner” sounding board, and the response was telling. The conclusion was essentially: training being turned off helps but does not change the recommendation dramatically, and the placeholder approach for client-identifying information remains the cleanest solution. Claude Pro itself said not to trust it not to break privilege. Here is a link to that conversation because I think it is worth reading as an example of how to think through this issue using the very tool you are evaluating.
What ABA Formal Opinion 512 Actually Requires
ABA Formal Opinion 512, issued in July 2024, is the foundational guidance on attorney AI use. It does not categorically prohibit AI use. What it requires is that before inputting client confidential information into any AI tool, an attorney must:
- Understand how the AI provider uses, stores, and potentially trains on submitted data.
- Determine whether the provider’s terms adequately protect confidentiality under Rule 1.6.
- Take steps to avoid inadvertent disclosure, which may include using placeholders for client-identifying information, using only enterprise-tier tools with data processing agreements, or limiting what client-specific information is ever entered.
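The placeholder approach in that last bullet is the one concrete technique here that can be mechanized. As a minimal sketch (the names, matter details, and mapping below are entirely hypothetical, and a real workflow would maintain the mapping per matter under access controls, not in a script), the idea is a reversible substitution applied before any text reaches an AI tool:

```python
import re

# Hypothetical per-matter mapping of client-identifying details to
# neutral placeholders. Illustrative only; not real client data.
PLACEHOLDERS = {
    "Jane Example": "[CLIENT]",
    "Acme Widgets LLC": "[OPPOSING PARTY]",
    "2026-CV-0001": "[CASE NUMBER]",
}

def redact(text: str, mapping: dict[str, str]) -> str:
    """Replace each client-identifying string with its placeholder
    before the text is ever pasted into an AI tool."""
    for identifier, placeholder in mapping.items():
        text = re.sub(re.escape(identifier), placeholder, text, flags=re.IGNORECASE)
    return text

def restore(text: str, mapping: dict[str, str]) -> str:
    """Reverse the substitution in the AI tool's output, back
    inside the firm's controlled environment."""
    for identifier, placeholder in mapping.items():
        text = text.replace(placeholder, identifier)
    return text

draft = "Acme Widgets LLC sued Jane Example in case 2026-CV-0001."
safe = redact(draft, PLACEHOLDERS)
print(safe)  # [OPPOSING PARTY] sued [CLIENT] in case [CASE NUMBER].
```

The design point is that the mapping never leaves your environment: only the placeholder version goes to the tool, and the restoration happens locally. A script like this is a supplement to, not a substitute for, the judgment call about what client-specific information should ever be entered at all.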
Opinion 512 also notes that competence under Rule 1.1 requires attorneys to understand the technology well enough to assess these risks. You cannot simply assume an AI tool is safe for client-adjacent work. You have to actually evaluate the product’s data handling.
My Practical Recommendation for Illinois Attorneys
Call the ARDC ethics hotline before you rely on any paid AI subscription for work involving client-identifying information. The hotline exists precisely for questions like this, the staff is genuinely helpful, and the call takes thirty minutes and creates a record that you sought guidance. Illinois Rule 1.6 requires reasonable measures to prevent unauthorized disclosure. Reasonable measures in 2026 include, at minimum, actually reading the terms of the tools you use and making an informed judgment rather than a default assumption.
For the actual work we do here: I use AI as a drafting and research assistant routinely, and I recommend you do too. The efficiency gains are real, and ABA Tech Show 2026 drove home the point that lawyers who don’t use AI to increase efficiency are going to be left behind much the same way lawyers who never adapted to email were. But I use placeholders for client-identifying information, I use it as a starting point that I verify and substantially revise, and I am thoughtful about what goes in. The tool is not the problem. Careless use of the tool is the problem.
The Eavesdropper in the Conference Room: AI Meeting Notetakers
I want to shift to a topic that is getting far less attention than Heppner in the current conversation but may represent a more immediate and more widespread ethics risk for practicing attorneys: AI meeting notetakers.
Tools like Otter.ai, Fireflies.ai, and Zoom’s AI Companion are increasingly standard in law firm environments. They transcribe meetings automatically, generate summaries, extract action items, and create searchable records. For administrative and business development purposes, they are genuinely useful. For client meetings and strategy discussions, they are a minefield.
The Privilege and Confidentiality Problem
The analysis here runs parallel to the Heppner analysis but with an important difference: the attorney is the one introducing the tool, not the client. An AI notetaker that is recording and transcribing a privileged attorney-client meeting is not a neutral observer. It is a third-party service that receives the communication, stores it on its own servers, and generates derivative documents, the transcript and summary, that exist outside your firm’s controlled environment.
The same third-party disclosure concern that applied to the client in Heppner applies here, but now you are the one potentially creating the disclosure. Courts have held that communications in the presence of third parties who are not agents of the attorney may not be privileged. The question of whether an AI notetaking service qualifies as an agent of the attorney for privilege purposes is unsettled, and the terms of most consumer-level notetaking tools do not support that characterization.
The NYC Bar’s Professional Ethics Committee addressed this in Formal Opinion 2025-6, issued in December 2025. The Opinion is careful and worth reading in full, but the core guidance is clear: lawyers must obtain client consent before recording any client meeting, regardless of whether the jurisdiction technically requires only one-party consent under its wiretap statute. The duty of loyalty and the client’s reasonable expectation of confidentiality require disclosure and consent even where it is not legally mandated.
The Opinion also emphasizes that attorneys must understand where the data is stored, how long it is retained, whether it is used for model training, and whether they have a right to delete it. These are the same ABA Opinion 512 inquiries applied to a specific tool category.
The Artifacts Problem
The live recording of the meeting is an obvious concern. But the bigger practical risk is what I call the artifacts: the transcript, the summary, the extracted action items. These exist as documents after the meeting ends. They are stored somewhere, usually on the notetaking service’s servers. They may be shared via links that do not require authentication. They may be retained indefinitely. And if they become the subject of a subpoena or discovery request, they are producible in a way that your own unrecorded mental notes are not.
Think about what a complete AI-generated transcript of a privileged strategy discussion looks like as a discovery document. Every theory you explored. Every weakness you acknowledged. Every client instruction you received. All of it, formatted into clean searchable text, stored on a third-party server.
The default answer for any meeting where attorney-client privilege or attorney work product is at stake is: AI notetakers off. Full stop. You can turn them on for administrative meetings, business development calls, and anything that does not involve client matters. But for client meetings, strategy discussions, and any communication you would normally treat as privileged or work product, the AI notetaker should not be running.
Now, there are exceptions. Plaud, for example, which many lawyers are using, offers enterprise-level security, encrypted file transfer and storage, and HIPAA compliance. Products and services with this kind of security may prove acceptable for privilege purposes, but the safest course is to check with your state bar ethics board before relying on any of them.
The Illinois-Specific Wrinkle
Illinois is a two-party consent state under its eavesdropping statute. Recording a conversation without the consent of all parties is not just an ethics issue, it can be a criminal one. And Illinois has also enacted the Biometric Information Privacy Act, which covers voiceprints derived from audio recordings. A class action was filed in December 2025 against Otter.ai alleging BIPA violations based on the creation of voiceprints through AI transcription without proper disclosure or consent. This is not theoretical. The litigation is active.
Before you use any AI transcription tool in your Illinois practice, you need to have obtained consent from all participants, verified that the tool’s data handling is compatible with your confidentiality obligations, and determined whether the tool’s voiceprint creation triggers BIPA compliance requirements. If you are unsure about any of those, the tool should not be running in your meetings.
Practical Guidance
Build AI tool policy into your engagement letters. State clearly that you may use AI tools for certain administrative and research purposes, describe the category of tools and their data handling, and obtain client consent or explicitly exclude certain tools from client matters.
If you are using Zoom for client meetings, verify your firm’s Zoom settings. AI Companion is enabled by default in many paid Zoom configurations. Turning it off for specific meetings is not enough. It can be re-enabled accidentally or by participants who have their own settings. Your firm’s Zoom tenant admin settings should default AI features off for any meetings involving client matters, with a deliberate opt-in for appropriate use cases.
The Phantom Citations Are Coming From Inside the Brief: Hallucinations and Sanctions
I want to spend some time on the hallucination and sanctions issue because the statistics have gotten genuinely alarming, and because I think there is a framing problem in how this risk is being communicated to practitioners.
The AI Hallucination Cases Database, maintained by researcher Damien Charlotin, tracked 712 reported incidents globally as of December 28, 2025, with 484 in the United States alone. Both figures had more than doubled in the four months since August 2025. That is not an occasional problem. That is a trend.
The judicial responses have escalated accordingly. Let me walk through the current spectrum because it is broader than most practitioners realize.
Mata v. Avianca, decided in the Southern District of New York in 2023, remains the landmark case. Attorneys submitted a brief containing AI-fabricated case citations. The court imposed monetary sanctions and required the attorneys to notify the client of the misconduct.
Versant Funding LLC v. Teras, Southern District of Florida, 2025, used language that should get every attorney’s attention: the court characterized the use of non-existent AI citations as ‘careless, negligent and reckless.’ That is not a mild rebuke. Those are the building blocks of a malpractice finding. The sanctions included reimbursement of opposing counsel’s fees and mandatory AI-focused CLE.
Johnson v. Dunn, Northern District of Alabama, 2025, and Byoplanet Int’l v. Johansson, Southern District of Florida, 2025, escalated to attorney disqualification and disclosure of the misconduct to the attorney’s entire book of business—every client, not just the one in the case.
And Nelson Henry v. Joseph Iannone and James Deacetis, Southern District of Florida, January 28, 2026, shows that the trend is continuing in 2026 with no signs of the judiciary becoming more tolerant.
The framing problem: these cases are often presented as warnings about AI tools, as if the lesson is ‘be careful with AI.’ The actual lesson is more specific and more actionable: the lesson is that unverified AI output in a court filing violates your duty of candor under Rule 3.3, your competence obligation under Rule 1.1, and your duty of supervision under Rules 5.1 and 5.3 if the hallucinated filing was produced by a subordinate using AI. The tool is not the problem. The failure to verify is the problem.
The practical requirement is a verification workflow, not abstention from AI. Specifically: every case citation generated by or with AI assistance must be verified against primary sources before it goes into any filing. Not checked to see if it looks plausible. Actually pulled up in Westlaw, Lexis, or FastCase, read, and confirmed to say what you are representing it says. This is not new. You have always been ethically required to cite-check your work. The difference is that AI can generate plausible-looking fake citations at a volume and a verisimilitude that was not previously possible, which makes the cite-checking step more important, not less.
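One small way to operationalize that workflow is to mechanically extract every citation-like string from a draft so that nothing slips past the human cite-check. This is a sketch only: the regex below covers a few common federal reporter formats and is nowhere near exhaustive (no state reporters, no slip opinions, no short-form cites), and every extracted string still has to be pulled up and read by a human in Westlaw, Lexis, or FastCase:

```python
import re

# Illustrative pattern for a few common federal reporter citations,
# e.g. "296 F.2d 918" or "598 F. Supp. 3d 123". Deliberately
# incomplete: a real cite-check list needs far broader coverage.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                              # volume
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)"  # reporter
    r"\s+\d{1,5}\b"                                              # first page
)

def extract_citations(draft: str) -> list[str]:
    """Pull citation-like strings from a draft so each one can be
    manually verified against the primary source before filing."""
    return sorted(set(CITATION_RE.findall(draft)))

brief = "The Kovel doctrine comes from United States v. Kovel, 296 F.2d 918 (2d Cir. 1961)."
print(extract_citations(brief))  # ['296 F.2d 918']
```

The value of a script like this is purely as a checklist generator: it guarantees you have a complete list of what must be verified, not that anything on the list is real. The verification itself cannot be automated away.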
Several courts have now adopted local rules or standing orders requiring disclosure of generative AI use in court filings. The requirements vary but the pattern is consistent: disclose the tool used, certify that a human reviewed the output, and certify that all citations are accurate. If you are practicing in a federal district or state court where you do not know the current AI disclosure requirements, you need to find out before your next filing. This is a competence issue under Rule 1.1.
The Human Authorship Question: Now Settled at the Circuit Level
I want to briefly flag a development from March 2, 2026 that is directly relevant to any IP attorneys in the audience and to any attorney advising clients who work in the creative or tech space.
The Supreme Court denied certiorari in Thaler v. Perlmutter on March 2, 2026, leaving intact the D.C. Circuit’s ruling that the Copyright Act requires copyrightable works to be authored by a human being. The Copyright Office’s position, that AI cannot be an author, that purely AI-generated works are not eligible for copyright protection, and that human creative control is the determinative factor, is now the settled law of the D.C. Circuit with the Supreme Court having declined to review it.
This does not end the litigation landscape around AI and copyright. Training data cases are still working through the district courts. The precise contours of ‘sufficient human control’ are still being developed through Copyright Office registration practice and the courts. But the foundational question—can AI be a legal author of a copyrightable work—is answered, at least until Congress acts or the Court takes a future case.
For IP attorneys advising clients: your clients who are creating works with AI assistance need to understand and document their human creative contribution. Clients who have already registered works with AI involvement and did not disclose it face potential registration challenges. And clients asking you about whether AI-generated work can be protected by copyright now have a clear answer at the circuit level: not without meaningful human authorship. There are important rules regarding AI and trademarks and AI and patents you should be aware of, too, all covered in last week’s episode.
The Practical Checklist: Surviving AI Ethics Without Losing Your License
On Heppner: Read the case, not the summaries. The holding is specific to client use of free consumer AI without attorney direction. It does not address attorney AI use, enterprise tools, or Kovel-directed client use. Counsel your clients about their own AI use and what it means for privilege. Their habit of running legal problems through ChatGPT or Claude on their own initiative before talking to you is now documented as a privilege risk.
On attorney AI use: ABA Opinion 512 is your framework. Understand your tools’ data handling. Use enterprise-tier tools with data processing agreements for anything involving client confidential information. Use placeholders for client-identifying details when working with non-enterprise tools. Call your state bar ethics hotline before you rely on a paid individual subscription for client-adjacent work if you have not already evaluated the product’s terms carefully.
On AI meeting notetakers: Default off for any meeting involving client matters or privileged communications, unless your state bar ethics line has advised you that the particular tool and plan are secure enough not to break privilege. Build AI tool disclosure and consent into your engagement letters. Know your jurisdiction's wiretap and biometric data consent requirements before any recording tool runs in your meetings. Understand that the artifacts, the transcripts and summaries, are the highest-risk documents because they exist and are stored after the meeting ends.
The advice your parents gave you holds true here, too: just because your friends are doing something doesn't mean it's okay. Just because your colleagues trust an AI tool doesn't mean you can skip seeing for yourself whether it's trustworthy.
On hallucinations and sanctions: The cite-check step is mandatory. Every AI-generated citation must be verified against primary sources before it goes in any filing. Know your court’s AI disclosure requirements. Supervise junior attorneys’ AI use under Rules 5.1 and 5.3.
On copyright and Thaler: The D.C. Circuit’s ruling that AI cannot be a legal author is now settled at the circuit level. Human authorship and human creative control are the legal standard. Document your clients’ creative processes.
This is a rapidly evolving area. More bar opinions are coming. More cases are being decided. The practical guidance in this post reflects the state of the law as of April 2026, and some of it will be refined over time. Stay current. The ARDC ethics hotline and your state bar’s ethics resources are your friends.
FAQ: Your AI and Ethics Questions, Answered (Before They Cost You Your License)
Q: What did the Heppner court actually decide?
A: The court in United States v. Heppner (S.D.N.Y. Feb. 10, 2026) held that thirty-one AI-generated documents created by a criminal defendant using the free consumer version of Claude were not protected by attorney-client privilege, on three independent grounds: the AI is not a lawyer; the free consumer tier’s terms of service did not create a reasonable expectation of confidentiality; and the documents were not made at the direction of counsel for the purpose of obtaining legal advice. The court did not address attorney use of AI, enterprise-tier AI tools, or client use of AI at counsel’s direction under the Kovel doctrine.
Q: Does Heppner mean attorneys cannot use AI for privileged work?
A: No. Heppner did not address attorney AI use. It was a ruling about a client who used a free consumer AI product on his own initiative without attorney direction. Attorneys publishing alerts claiming that Heppner prohibits attorney AI use are extrapolating from facts the court did not address. That said, attorney AI use and privilege is genuinely unsettled territory, and ABA Formal Opinion 512 (July 2024) sets out the framework for how attorneys must evaluate any AI tool before inputting client confidential information.
Q: What is the difference between free, paid, and enterprise AI tools for privilege purposes?
A: The Heppner court’s analysis was specifically tied to the terms of service for the free consumer tier of Claude, which explicitly allow Anthropic to access and review conversations. A paid individual subscription with training disabled is better but still not equivalent to an enterprise agreement with a data processing addendum and explicit contractual confidentiality provisions. Enterprise-tier tools with proper data processing agreements are the most defensible option for client-adjacent work under the developing professional consensus, though no court has definitively addressed attorney use in the enterprise context.
Q: What is the AI meeting notetaker ethics problem?
A: AI transcription tools like Otter.ai, Fireflies.ai, and Zoom’s AI Companion record meetings, generate transcripts and summaries, and store those artifacts on third-party servers. For client meetings involving privileged communications, those artifacts may constitute a third-party disclosure that undermines privilege, and the attorney is the one introducing the tool. NYC Bar Formal Opinion 2025-6 (December 2025) requires client consent before any AI recording tool runs in a client meeting, regardless of local wiretap law requirements. In Illinois, the two-party consent eavesdropping statute and BIPA add additional compliance layers.
Q: What happens to attorneys who submit AI-hallucinated citations?
A: The sanctions spectrum ranges from monetary penalties and mandatory CLE (Mata v. Avianca, Versant Funding v. Teras) to attorney disqualification and disclosure of misconduct to every client in the attorney’s book of business (Johnson v. Dunn, Byoplanet v. Johansson). The conduct underlying these sanctions implicates Rule 3.3 (candor toward the tribunal), Rule 1.1 (competence), and Rules 5.1 and 5.3 (supervision). The lesson is not to avoid AI; it is to verify every AI-generated citation against primary sources before it goes in any filing.
Q: Do I have to disclose AI use in court filings?
A: Increasingly, yes, but the requirements vary by jurisdiction. Several federal districts and state courts have adopted local rules or standing orders requiring disclosure of generative AI use in filings, along with certifications that a human reviewed the output and that all citations are accurate. If you do not know your court’s current AI disclosure requirements, finding out before your next filing is a Rule 1.1 competence obligation.
Q: What did the Supreme Court’s denial of certiorari in Thaler v. Perlmutter mean for copyright?
A: The Supreme Court denied certiorari on March 2, 2026, leaving intact the D.C. Circuit’s ruling that the Copyright Act requires human authorship. The Copyright Office’s position, that AI cannot be a legal author and that purely AI-generated works are not eligible for copyright protection, is now settled law at the circuit level. Clients creating works with AI assistance need to understand and document their human creative contribution, and clients who registered AI-involved works without proper disclosure face potential registration challenges.
Q: What should I put in my engagement letters about AI?
A: At minimum: a clear statement that you may use AI tools for certain administrative and research purposes, a description of the category of tools and their data handling standards, and client consent provisions or explicit exclusions for specific tool categories in client matters. For any AI recording or transcription tools, explicit consent before any meeting where such a tool might run. NYC Bar Opinion 2025-6 and ABA Opinion 512 are the primary guidance documents for what these provisions need to address. Keep an eye out for guidance from your state bar. Your malpractice insurance carrier may have guidance, too.
Intellectual property is one of your most powerful business tools. If you’re ready to build a strong brand and protect what you create, you don’t have to figure it out alone.
I help entrepreneurs across the U.S. make smart, legally sound decisions about their intellectual property. I’m an attorney in Champaign-Urbana, Illinois, but I serve intellectual property clients nationwide.
Ready to protect your work? Book a consultation online at kingpatentlaw.com or call 217-714-8558.
For more information on intellectual property and business law, check out the other posts on this site, listen to my podcast “Spellbinding IP: Patent, Trademark, and Business Strategy” on all major podcast platforms (video available on YouTube, Spotify, and Substack), or follow me on social media at @kingpatentlaw.
Avoid the legal horrors, and keep rocking your IP.
