Sullivan & Cromwell AI Filing Error Highlights Legal Tech Risks

By Kavya Nair
Overview

Leading law firm Sullivan & Cromwell has apologized to a U.S. bankruptcy court for an AI-generated filing that included inaccurate citations and fabricated legal sources. The firm acknowledged that its internal AI policies and review processes were not followed, allowing the 'hallucinations' to reach the court in the Prince Global Holdings Limited case. The episode illustrates the serious risks of using AI in legal practice, including reputational damage and potential sanctions, and underscores the urgent need for strict AI rules and verification across the industry. S&C has since filed a corrected motion and is strengthening its AI oversight.


Sullivan & Cromwell, a leading law firm, has admitted to artificial intelligence 'hallucinations' in a recent court filing. This high-profile incident, involving one of Wall Street's most respected firms, highlights the challenges and risks of integrating AI into sensitive legal work.

The firm issued a formal apology to Chief Judge Martin Glenn of the U.S. Bankruptcy Court for the Southern District of New York. The apology concerned an emergency motion filed in the In re Prince Global Holdings Limited insolvency case, which contained inaccurate case citations and misrepresented legal authorities. Partner Andrew G. Dietderich confirmed that the firm's internal policies for AI use were not followed and its review processes missed the errors before submission. Opposing counsel, Boies Schiller Flexner LLP, reportedly discovered the inaccuracies, leading to a corrected filing and apologies. Sullivan & Cromwell stated that existing safeguards were in place but not applied in this instance.

AI Use Growing in Law Firms

AI tools are rapidly being adopted across the legal sector, with nearly 80% of lawyers now using them, a sharp rise over the past two years. While AI offers significant efficiency gains in legal research, drafting, and analysis, its rapid integration has often outpaced proper oversight. 'AI hallucinations', such as made-up case citations or invented legal precedents, are a common problem: tracking records show more than 1,300 such incidents in legal filings. Generative AI models can produce text that sounds convincing but is factually wrong, because they prioritize statistical plausibility over accuracy.

Rules and Liability for AI Use

Legal professional bodies and courts are taking note of AI errors. The American Bar Association (ABA) has issued guidance reminding lawyers that using AI does not exempt them from ethical duties like competence and diligence. Under Federal Rule of Civil Procedure 11, attorneys must ensure all court filings are accurate, making them liable for their content even if AI generated it. As a result, some attorneys have faced penalties such as fines, disqualification from cases, or disciplinary referrals for submitting unverified AI content. Local court rules, like those in the Southern District of New York, also require attorneys to certify the accuracy of their submissions. This situation reflects a broader global movement toward establishing AI governance frameworks to ensure responsible use.

Accountability and Trust on the Line

The incident is particularly notable because Sullivan & Cromwell advises OpenAI on AI ethics. That irony underscores the gap between developing AI policies and applying them in practice: the firm's failure to follow its own strict AI usage and review protocols is precisely the kind of lapse that AI oversight frameworks aim to prevent. Under Federal Rule of Civil Procedure 11, attorneys and firms remain ultimately responsible for the accuracy of their filings, and submitting fabricated citations is a serious breach of the duty owed to the court. Having AI policies alone is not enough; they must be strictly enforced. The failure to catch these AI errors, which included misquoted passages and jumbled text, suggests weaknesses in quality control or an over-reliance on AI without thorough independent checks, a common problem in the industry. As other courts have already imposed sanctions for similar AI mistakes, this incident may lead to increased judicial scrutiny of all AI-assisted legal work.

The Path Forward for AI in Legal Practice

Sullivan & Cromwell's apology and plans to improve AI training and review processes reflect a wider industry understanding: adopting AI in law requires strict diligence and accountability. The legal field must move from simply having policies to fostering a culture of constant verification and risk management. Law firms are expected to create detailed AI governance plans, including robust review processes, mandatory ethics training, and clear client communication. The legal technology market will likely see greater demand for AI tools offering transparency and verifiable results, ensuring that technological advances support, rather than undermine, the integrity of legal work.

