The promise of artificial intelligence in law is exciting, from streamlining research to drafting documents. Yet for attorneys preparing and filing briefs with a court, this powerful technology is an ethical and practical minefield that demands extreme caution.
Recent incidents across the legal landscape serve as stark reminders: AI is a tool, not a human lawyer, and blind reliance can lead to professional embarrassment, sanctions, potential malpractice claims, and even case dismissal.
The most glaring danger of using generative AI in legal brief writing is its propensity for "hallucinations." AI models, especially large language models, are designed to generate plausible text, not to guarantee factual accuracy. The result can be non-existent case law, fabricated quotes, and misleading legal conclusions.
Real-Life Examples
We've seen this play out in alarming fashion. In June 2023, two New York attorneys were sanctioned in Mata v. Avianca for submitting a brief riddled with non-existent cases generated by ChatGPT. Their defense that they did not know AI could fabricate information fell flat, highlighting the non-negotiable duty to verify.
More recently, in May 2025, a California judge sanctioned two law firms for submitting a brief with "bogus AI-generated research," noting fake citations and "phony" quotes.
And in July 2025, attorneys in the MyPillow defamation case were fined $3,000 each after their AI-assisted filing contained over two dozen errors, including citations to fabricated cases, with the judge criticizing their lack of candor.
These incidents unequivocally underscore that AI never absolves lawyers of their ethical duties of candor toward the tribunal and of competence. If AI contributes to a filing, attorneys must personally verify every citation, quotation, and factual or legal assertion against the original sources. That means actually pulling up the cases and statutes, not simply trusting the AI's output.
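As a practical aid, that verification pass can begin with a mechanical sweep of the draft for citations to check. Below is a minimal sketch; the regex, case names, and citations are all illustrative inventions, and no simple pattern covers the full Bluebook citation grammar:

```python
import re

# Rough pattern for U.S. reporter citations such as "410 U.S. 113" or
# "123 F.3d 456". Real citation formats are far richer, so treat this
# as a way to build a checklist, not an exhaustive parser.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                                   # volume
    r"(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)"   # reporter
    r"\s+\d{1,4}\b"                                                   # first page
)

def extract_citations(draft: str) -> list[str]:
    """Return each reporter-style citation found in the draft, in order."""
    return CITATION_RE.findall(draft)

# Hypothetical draft text; these case names and citations are invented.
draft = (
    "Plaintiff relies on Smith v. Jones, 123 F.3d 456, and on "
    "Doe v. Acme Corp., 789 F. Supp. 2d 1011."
)
print(extract_citations(draft))  # ['123 F.3d 456', '789 F. Supp. 2d 1011']
```

The script only builds the checklist; it cannot confirm that any case exists. Each extracted citation still has to be pulled and read in a trusted source such as the official reporter, Westlaw, or Lexis.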
AI can assist with research and drafting, certainly, but it cannot replace the nuanced understanding, strategic thinking, and ethical judgment that a lawyer brings to a case. The final legal arguments and conclusions must always be the product of the attorney's independent judgment. Moreover, lawyers must understand the specific AI tool they are using, including its capabilities and, critically, its limitations, such as whether it's trained on a reliable legal dataset or a general-purpose dataset prone to inaccuracies.
Confidentiality Concerns in Using AI
Beyond accuracy, confidentiality poses another significant risk. Inputting confidential client information into AI systems, especially those that "self-learn" from user inputs or are not explicitly designed for secure legal use, can expose that information beyond the attorney-client relationship.
Lawyers have an unwavering duty to safeguard client information. Before using any AI tool, attorneys must thoroughly vet AI providers to understand their policies on data retention and sharing, and how they utilize inputted information. For self-learning AI tools, obtaining informed client consent before inputting confidential information may even be necessary, and anonymizing data whenever possible is a wise practice.
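Anonymization can be partially automated before any text reaches an AI tool. The sketch below is a minimal illustration with invented client details and hypothetical patterns; a production redaction pipeline would need far broader coverage (names, addresses, account numbers) and human review of the output:

```python
import re

# Hypothetical patterns for a few common identifiers. These three
# regexes are illustrative only and catch nothing beyond the exact
# formats shown here.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Invented client details for illustration.
note = "Client reachable at jane.roe@example.com or 555-867-5309; SSN 123-45-6789."
print(redact(note))  # Client reachable at [EMAIL] or [PHONE]; SSN [SSN].
```

Note what the sketch misses: a client's name, employer, or the distinctive facts of a matter can identify them just as surely as a Social Security number, which is why automated redaction supplements, rather than replaces, the attorney's own judgment about what may be shared.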
Mandatory Disclosure of Use of AI in Court Filings
Courts are also rapidly evolving their stance on AI in legal filings, with many jurisdictions now demanding explicit disclosure if AI was used in researching or drafting a document. This may require identifying the specific AI program, specifying which sections were AI-generated, and certifying that a human diligently reviewed the AI's output for accuracy and relevance.
The Delaware Court of Chancery, for example, has already cautioned litigants that unverified AI use could lead to sanctions, emphasizing that every filing must be "truthful, accurate, and cites to legitimate authorities." Lawyers must stay abreast of their specific court's rules and any standing orders regarding AI usage, as ignorance is not a defense.
While the risks are very real, AI, when used responsibly, can be a valuable asset. The key is to start with legal-specific AI tools, which are generally trained on verified legal databases. Crucially, treat AI output as a first draft at best, requiring thorough human review and editing. Always verify every citation, quotation, and factual assertion against original sources. Understand your AI tool's nuances, prioritize confidentiality by ensuring robust data security, and stay relentlessly informed about ethical guidelines, court rules, and best practices. Law firms, too, should develop clear, firm-wide policies for AI use to ensure consistent and ethical application across the practice.
The integration of AI into legal practice is inevitable. However, for lawyers, the ethical obligations of competence, candor, and client confidentiality remain paramount. By exercising extreme caution, embracing thorough human oversight, and staying informed about evolving best practices, attorneys can harness the power of AI while upholding the highest standards of the legal profession and ensuring the integrity of their court filings. The future of law is intertwined with technology, but the wisdom and judgment of human lawyers will always be indispensable.