In 2014, Wellesley Partners LLP succeeded at trial in a professional negligence claim against its former solicitors over amendments made to a limited liability partnership agreement without proper instructions. The solicitors’ negligence cost Wellesley around £2.75m; after issuing proceedings, the firm was awarded around £1.6m in damages.
The lawyers were held accountable, but can you expect the same level of accountability with artificial intelligence (“AI”)? The quick answer is no.
Who can provide legal advice?
To provide legal advice in England and Wales, you must be authorised to do so and qualified as a solicitor, barrister or chartered legal executive. If a legal representative fails to perform their services to the expected standard, they can be held accountable. You have likely heard of lawyers being sued for negligence or perhaps being “struck off” for breaching the code of conduct and/or principles. This is because legal representatives are held to a high standard, required to undertake significant training and development, and, at all times, are regulated by a governing body.
Nevertheless, many people are still turning to AI for services which they could, and in practice ought to, be receiving from qualified and regulated lawyers. Legal representatives are held accountable; AI platforms, by and large, are not.
The rise of AI
Historically, solicitors would have been required to draft documents from scratch. Later, with the increased use of legal databases, many firms would amend template documents to meet the client’s objectives. Nowadays, it is likely that some firms, companies and individuals rely on AI to prepare contracts, agreements and various other legal documents.
Over the past few years, and perhaps most dramatically in the last 12 months, there has been a significant increase in the number of AI platforms and applications. Some people have not used these at all, others rely on AI for menial tasks, but some individuals blindly rely on these applications without proper consideration of what information they are being given and, equally important, how reliable it is.
A vast number of publications discuss the use of AI in the legal system, with claimants and defendants alike relying on AI to provide legal advice or prepare documents. You would be hard pressed to find a lawyer who has not corresponded with an individual whom they suspect of having used AI to draft an email or a court document, or to supply case law to cite as precedent. The risks of using AI for legal advice are already evident in so-called “AI hallucinations”, which produce documents referencing non-existent case law. In several jurisdictions, lawyers have failed to check information provided by an AI platform and, when the mistakes were discovered, were ordered to pay costs and/or barred from practising.
Suing AI
If you have used AI, you will likely recognise the boilerplate warning that the AI can make mistakes and that users should always check its output, or the lengthier terms and conditions stating that the information provided should not be relied upon.
Despite this, an Illinois-based company has attempted to sue OpenAI, the owner and developer of ChatGPT, over the advice it provided to the company’s opponent in proceedings. Before issuing its claim against OpenAI, the company was sued by an individual who relied on ChatGPT to prepare his filings. The individual, supposedly on the “legal advice” of ChatGPT, filed an application to reopen proceedings which had been dismissed and filed superfluous applications at the AI’s suggestion. As a result, the company incurred significant legal fees in disputing the claim and is now seeking $300,000 in damages and $10 million in punitive damages. But is the AI responsible for this, or the individual who blindly followed the advice he was given?
Interestingly, when you ask ChatGPT about the claim, part of its response states:
“Yes. OpenAI is currently facing a lawsuit claiming ChatGPT helped someone with legal filings and effectively practices law without a license. The case is still ongoing.”
The AI even acknowledges the importance of the case to test the utilisation of AI in the legal system and concludes that “It’s actually a pretty big issue for the future of AI and law.” Personally, I do not think you need AI to state the obvious.
This is not the only case currently pending against OpenAI, and one must question how many more claims OpenAI and other AI companies will have to defend over the coming months and years, and how the courts of all jurisdictions will handle them.
Griffin Law is a dispute resolution firm comprising innovative, proactive, tenacious and commercially-minded lawyers. We pride ourselves on our close client relationships, which are uniquely enhanced by our transparent fee guarantee and a commitment to share the risks of litigation. For more details of our services, please email justice@griffin.law or call 01732 52 59 23.
GRIFFIN LAW – TRANSPARENT FEES. TENACIOUS LAWYERS. TRUSTED PARTNERS.
Nothing in this document constitutes any form of legal advice upon which any person can place any form of reliance of any kind whatsoever. We expressly disclaim, and you hereby irrevocably agree to waive, all or any liability of any kind whatsoever, whether in contract, tort or otherwise, to you or any other person who may read or otherwise come to learn of anything covered or referred to in this document. If you wish to take any action in connection with the subject matter of this document, you should obtain legal advice before doing so.