A US lawyer made a fool of himself in court by citing past judgments that do not exist. The supposed "passenger rights expert" had simply left the preparation of his case to ChatGPT.
The consequences were disastrous for him and his client, because the decisions he referred to in his submissions were ones he never should have relied on: they simply did not exist. ChatGPT had conjured up rulings against airlines such as Iran Air and Delta that were never issued. No such proceedings had ever taken place.
For the lawyer, the matter was not only embarrassing but also landed him in legal trouble. He had to declare under oath that he had not intentionally deceived the court in New York, but had negligently relied on ChatGPT. He acknowledged that failing to verify the AI's work before submitting the brief to the court was a serious error.