No ChatGPT in my court: Judge orders all AI-generated content must be declared and checked

Few lawyers would be foolish enough to let an AI make their arguments, but one already did, and Judge Brantley Starr is taking steps to ensure that debacle isn’t repeated in his courtroom.

The Texas federal judge has added a requirement that any attorney appearing in his court must attest that “no portion of the filing was drafted by generative artificial intelligence,” or if it was, that it was checked “by a human being.”

Last week, attorney Steven Schwartz allowed ChatGPT to “supplement” his research for a federal filing; the chatbot supplied him with six cases and relevant precedent, all of which were completely hallucinated by the language model. He now “greatly regrets” doing this, and while national coverage of the gaffe has probably prompted other lawyers tempted to try the same thing to think again, Judge Starr isn’t taking any chances.

Like other judges, Starr has the opportunity to set specific rules for his courtroom, posted at the federal site for Texas’s Northern District. Recently added (though it’s unclear whether in response to the aforementioned filing) is the “Mandatory Certification Regarding Generative Artificial Intelligence.” Eugene Volokh first reported the news.

All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.

A form for lawyers to sign is appended, noting that “quotations, citations, paraphrased assertions, and legal analysis” are all covered by this proscription. Since summarizing is one of AI’s strong suits, and finding and summarizing precedent or previous cases has been advertised as a potentially helpful application in legal work, this requirement may come into play more often than expected.

Whoever drafted the memorandum on this matter at Judge Starr’s office has their finger on the pulse. The certification requirement includes a well-informed and convincing explanation of its necessity (line breaks added for readability):

These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them. Here’s why.

These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath.

As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.

In other words, be prepared to justify yourself.

While this is just one judge in one court, it would not be surprising if others adopted this rule as their own. As the court says, this is a powerful and potentially helpful technology, but its use must at the very least be clearly declared and checked for accuracy.
