ChatGPT and Me: AI Use in My Legal Practice
Every few weeks there seems to be a legal news story about a lawyer falling foul of AI use in their practice. Take, for example, the case of Ayinde earlier this month, in which Mr Ayinde’s lawyer tried to bolster her client’s case by relying on principles established in five previous cases.
That would all be routine enough, except for one problem: all five of the cases cited to the court were entirely fictitious.
The lawyer first attempted to pass the fabrications off as “minor citation errors”, before later accepting the seriousness of their inclusion. Although the court made no formal finding as to how the cases came to be cited, careless use of AI was put forward as a pretty likely explanation.
Mr Ayinde’s case - a judicial review of the local authority’s decision to deny him housing despite his medical conditions - was ultimately successful, based on the strength of the medical evidence he adduced to the court. Indeed, the court questioned why fake case law had even been necessary, given the objective strength of Mr Ayinde’s case without it.
In line with normal practice, the local authority was also ordered to pay Mr Ayinde’s legal costs. However, this award was reduced by £7,000 because of his lawyer’s conduct. I don’t need to point out the significance of £7,000 to a man who had previously experienced street homelessness, and it serves as a sobering reminder of the consequences when AI and judicial reality collide.
Ayinde's case is, regrettably, not a total anomaly. Like everyone, lawyers find themselves grappling with the ethics of the AI revolution which, we are assured, is coming for us all.
It’s clear that, in its current state, AI cannot be entirely trusted for legal research. There are also major problems with confidentiality, as there is no guarantee that personal information entered into certain AI platforms won’t be randomly spat out at another user on the other side of the world.
There’s also the important question of how clients feel about their lawyers using AI - might they be impressed, or horrified?
I don’t pretend to be immune, but I am extremely careful.
Outside of work, ChatGPT is my bestie and we talk several times a day. Our recent convos include “free activities with a toddler”, “Dulux paint suggestions”, and “Stephen King book recommendations”. All these chats generated useful, actionable advice.
At work, my usage is admittedly far less interesting: document comparison, website copy assistance, email redrafts. Boring is good though, right?
In my professional life, ChatGPT is a nerdy intern – well-meaning and highly knowledgeable, but to be carefully managed. I use it to enhance my wording, improve the flow of a paragraph, and draft low-stakes emails. Time-saving is cost-saving, after all, and my clients stand to benefit just as much as I do.
I never enter anything confidential or identifiable about my clients or myself. I almost never use ChatGPT for legal research, but on the rare occasions I do, it’s a starting point and not my final answer. I check what is churned out.
I believe there’s still a reason lawyers go to law school, and ultimately, I believe there’s still value in the colourful complexities of human thinking.
I promise I wrote everything above this line myself. To further illustrate, though, I’ll ask ChatGPT to write a blog article about the dangers of using ChatGPT in legal practice. Informal tone, please, with some funny bits if possible. Max 200 words.
The result is pasted below. It’s decent, for sure, and I broadly agree with its general message, but the tone is far too forceful for my own personal style (“here’s the deal” - yikes).
The language is clearly Americanised – surely a giveaway for a UK lawyer, although probably correctable with further prompting. It simply doesn’t read like me.
And the bigger, real-world problems with using AI in legal practice should not be overlooked. Innovation must be balanced with responsibility, professional obligations, and our duty to always act in a client’s best interests. For me, these will always be paramount.
The Perils of Letting ChatGPT Play Lawyer
So, you’re thinking about letting ChatGPT help with your legal work? Cool. Just one tiny problem: I'm not a lawyer. And I don’t even play one on TV.
Sure, I can draft a snazzy cease-and-desist or explain the Rule Against Perpetuities (kind of), but trusting me to handle your case? That's like asking your barista to perform surgery because they’ve watched Grey’s Anatomy.
Here’s the deal: I don’t know your jurisdiction, I don’t keep up with every new court ruling, and—brace yourself—I occasionally make stuff up. It’s called a “hallucination,” but that’s just a fancy word for confidently spouting nonsense.
And let’s not forget client confidentiality. ChatGPT doesn’t retain your data, but uploading sensitive info into any AI tool? Yeah, your ethics board probably has thoughts about that.
In short: I’m a great research buddy, a decent brief drafter, and a terrible attorney. Use me to brainstorm, sure—but when it comes to real legal advice?
Call an actual lawyer. Preferably one with a license. And pants.