Earlier this month I came across an article in The Economist about the growing use of AI in the legal profession. The subheading said ‘Generative AI could radically alter the practice of law’, and the article went on to tell the tale of personal-injury lawyer Steven Schwartz, who had used ChatGPT to help him prepare an important court filing. It turns out the AI bot really went to town, using its creative powers to the full, hallucinating at liberty.
Just imagine the bot proudly presenting its findings to the lawyer: a bundle of entirely made-up cases, rulings and fake judicial opinions and citations, which it assured its master were real and could be found in reputable legal databases. Needless to say, when checks were made (not by said lawyer, who actually submitted the lot to the court without verification), much of it was found to be totally fictitious, albeit credibly presented.
Called upon to explain himself at a court hearing, a deeply embarrassed Mr. Schwartz remorsefully admitted that he “did not comprehend that ChatGPT could fabricate cases”. The New York Times reports that “the judge lifted both arms in the air, palms up, while asking Mr. Schwartz why he did not better check his work”. Mr. Schwartz said that he “continued to be duped by ChatGPT. It’s embarrassing.” He also explained that he had never used the AI bot before and had simply assumed it was like a super search engine.
So, who is to blame? Not AI, is the conclusion. No. The fault lies squarely with the lawyer, who did not consider it necessary to check his AI assistant’s output or to do any additional research to verify it.
“This case”, said a legal commentator quoted in the NY Times, “has reverberated throughout the entire legal profession… It’s a little bit like looking at a car wreck.” And Irina Raicu, who directs the internet ethics program at Santa Clara University, said the case showed that the vast majority of people who are playing with these models don’t really understand how they work, or what their limitations are.
No use, therefore, complaining about the imperfections or lack of truthfulness of ChatGPT, or its inventiveness. But no reason either for the legal profession to abandon AI altogether. Considering that lawyers spend a lot of their time scrutinizing long-winded, tiresome documents, it would be foolish to deny that there are ‘legal’ tasks that can be admirably executed by AI, and at very impressive speeds.
In The Economist article, this type of AI is called ‘extractive’, as opposed to ‘generative’, AI. The article suggests that there is nothing wrong with lawyers using ‘extractive’ AI to analyse texts and provide answers to questions about their contents. Most of us at some stage have been outraged by the fees lawyers charge, and we are aware that law firms employ inexperienced staff (new graduates) to do much of the footwork while still charging their clients the full hourly fee. AI could bring about a change here, which would be beneficial to us as clients.
The take-away from this story is that ‘extractive’ AI is proving to be very useful and may well bring down legal fees, whereas generative AI remains a treacherous and unreliable source of information: if it is used at all, it must be treated with great circumspection and responsibility, and its output carefully checked.
It seems to me that the situation in our sector (translation and writing original copy) is rather similar.
And just as lawyers will continue to be needed, although perhaps in smaller numbers and/or for somewhat different (less routine) tasks, the same is true of the translation profession.
So, if you are a translator or post-editor, do not rely blindly on the output you get from MT engines; do not switch off your brain and confirm segments with a happy-go-lucky attitude, without reflection. Instead, concentrate, read carefully, and look out for things that do not make sense or seem illogical. If in doubt, check and ask questions. And if you are a translation requester, do not assume that, now that we have MT and AI, no human thought or intervention is needed, or that the tools can supplant humans. Or that millions of words can be processed overnight without potential disaster. That is not the case.
You can’t blame it on the bot. As always, it’s the translator’s fault.
—————————
The article in the NY Times can be read here: https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html