Sullivan & Cromwell, a leading law firm, has admitted that a recent court filing contained artificial intelligence 'hallucinations'. This high-profile incident, involving one of Wall Street's most respected firms, highlights the risks of integrating AI into sensitive legal work.
The firm issued a formal apology to Chief Judge Martin Glenn of the U.S. Bankruptcy Court for the Southern District of New York. The apology concerned an emergency motion filed in the In re Prince Global Holdings Limited insolvency case that contained inaccurate case citations and misrepresented legal authorities. Partner Andrew G. Dietderich confirmed that the firm's internal policies for AI use were not followed and that its review processes missed the errors before submission. Opposing counsel, Boies Schiller Flexner LLP, reportedly discovered the inaccuracies, prompting Sullivan & Cromwell to submit a corrected filing and apologize. The firm stated that safeguards existed but were not applied in this instance.
AI Use Growing in Law Firms
AI tools are rapidly being adopted across the legal sector, with nearly 80% of lawyers now using them, a sharp increase over the past two years. While AI offers significant efficiency gains in legal research, drafting, and analysis, its rapid integration has often outpaced proper oversight. 'AI hallucinations', such as made-up case citations or invented legal precedents, are a common problem, with records showing over 1,300 such incidents in legal filings. Generative AI models can produce text that sounds convincing but is factually wrong, because they prioritize statistical plausibility over accuracy.
Rules and Liability for AI Use
Legal professional bodies and courts are taking note of AI errors. The American Bar Association (ABA) has issued guidance reminding lawyers that using AI does not exempt them from ethical duties like competence and diligence. Under Federal Rule of Civil Procedure 11, attorneys must ensure all court filings are accurate, making them liable for their content even if AI generated it. As a result, some attorneys have faced penalties such as fines, disqualification from cases, or disciplinary referrals for submitting unverified AI content. Local court rules, like those in the Southern District of New York, also require attorneys to certify the accuracy of their submissions. This situation reflects a broader global movement toward establishing AI governance frameworks to ensure responsible use.
Accountability and Trust on the Line
The incident is particularly notable because Sullivan & Cromwell advises OpenAI on AI ethics, an irony that underscores the gap between developing AI policy and applying it in practice. The firm's failure to follow its own AI usage and review protocols is precisely the kind of lapse that oversight frameworks aim to prevent. Because Rule 11 places ultimate responsibility for filing accuracy on attorneys and their firms, submitting fabricated citations is a serious breach of the duty owed to the court; having AI policies is not enough if they are not strictly enforced. The failure to catch these errors, which included misquoted passages and jumbled text, suggests weak quality control or an over-reliance on AI without thorough independent checks, a problem common across the industry. As other courts have already imposed sanctions for similar AI mistakes, this incident may invite increased judicial scrutiny of all AI-assisted legal work.
The Path Forward for AI in Legal Practice
Sullivan & Cromwell's apology and plans to improve AI training and review processes reflect a wider industry understanding: adopting AI in law requires strict diligence and accountability. The legal field must move from simply having policies to fostering a culture of constant verification and risk management. Law firms are expected to create detailed AI governance plans, including strong review processes, mandatory ethics training, and clear client communication. The legal technology market will likely see greater demand for AI tools offering transparency and verifiable results, ensuring technology advances support, rather than undermine, the integrity of legal work.
