Using AI for legal research

How safe is using AI for legal research? On the one hand, AI is making rapid progress and keeps getting better, and the arrival of a new generation of AI agents will only accelerate that process. On the other hand, headlines keep appearing about law firms being fined for using AI that referred to non-existent legislation and jurisprudence. In this article, we look at a) how AI is reshaping legal research, b) the risks and accuracy concerns of using AI in legal research, and c) possible mitigation strategies. Finally, d) we look at using AI for legal research on non-US law.

How is AI reshaping legal research – benefits

AI has had a significant impact on legal research, and generative AI has certainly sped up that process. Many law firms use generative AI to assist with their legal research. It is easy and convenient: lawyers can ask questions in natural language rather than having to master a query language. And now that most generative AI platforms offer more advanced research agents that can provide sources, AI has become even more attractive. In short, AI is reshaping legal research in several impactful ways, most of them beneficial.

One of the most noticeable changes is the enhanced speed and efficiency it brings. AI tools are capable of sifting through vast volumes of legal data in seconds, identifying relevant information much faster than a human could. This efficiency saves lawyers considerable time and resources.

Beyond speed, AI can also improve the accuracy and depth of insight in legal research. By analysing large datasets, AI can detect patterns and extract insights that might go unnoticed by human researchers. It can also flag potential errors or inconsistencies in legal documents, helping to ensure the accuracy and reliability of the information used. But caution is needed, as we will discuss below.

Another major advantage is the broader access to legal information that AI provides. These tools can draw from a wide array of sources, including statutes, case law, legal journals, and specialized databases. This comprehensive reach allows lawyers to develop a fuller understanding of the legal issues they face.

Natural Language Processing (NLP) and machine learning further enhance AI’s capabilities in the legal field. NLP enables AI to comprehend the meaning within legal texts. This allows it to extract key information and identify relevant precedents. Meanwhile, machine learning algorithms can analyse historical case data to predict outcomes. This gives lawyers valuable insights into the strengths and weaknesses of their cases.

AI is also increasingly being integrated into established legal research platforms. This integration improves the efficiency and comprehensiveness of legal research.

However, as AI becomes more embedded in legal practice, responsible usage is essential. Ensuring accuracy, upholding ethical standards, and maintaining regulatory compliance are critical. Lawyers must treat AI as a supportive tool rather than a standalone solution, and it remains vital to verify any information generated by AI systems, because there are still considerable risks involved in using AI for legal research.

Risks and accuracy concerns of using AI in legal research

In a recent case in California, a judge found that nine of the twenty-seven quoted sources were non-existent. The two law firms involved (one had delegated research to the other) were fined USD 31,000. If you follow the news on legal AI, you will know this is a common problem. Apart from that, AI is still often biased, too. Let’s have a closer look at both issues.

Accuracy concerns

AI systems can produce inaccurate, incomplete, or misleading legal information. This is particularly the case when dealing with complex cases, with nuanced legal concepts, or when legislation or jurisprudence has changed recently.

Even worse are AI “hallucinations”. As witnessed in the example above, AI can generate plausible but factually incorrect information. It is therefore crucial to verify all AI-generated output against credible sources. The Californian example highlights how serious this risk is: one in three of the quoted sources did not exist.

The example also illustrates the risk of relying on AI without oversight. You cannot assume the AI knows what it is doing. Over-reliance on AI without thorough human review can lead to errors that compromise case outcomes and erode client trust.

Bias and ethical concerns

In previous articles, we pointed out that AI inherits and reflects the biases of the data it was trained on. This can lead to unfair or discriminatory legal outcomes. Bias in AI algorithms, therefore, is a first concern.

Many AI systems cannot explain how they reached their conclusions, or they fail to mention sources. Lack of transparency and accountability, therefore, is a second issue. The underlying algorithms can be opaque, making it difficult to understand how a conclusion was reached and to hold the system accountable.

Clients may not fully understand the role of AI in their legal representation. This can easily undermine their trust. Clear communication is essential.

As with any online tool that shares client information, the use of AI raises data privacy and confidentiality concerns.

Finally, there is the aspect of professional responsibility. Lawyers have a duty to supervise AI-generated work, ensuring it is accurate and ethical. They also must communicate with clients about the use of AI tools.

Mitigation strategies

These risks can be counteracted by implementing some mitigation strategies:

  • Always verify AI-generated results against credible legal databases and primary sources.
  • Actively oversee and review AI-generated work to ensure accuracy, as well as ethical compliance.
  • Be transparent with clients about the use of AI tools.
  • Implement robust data security measures to protect client information and comply with privacy regulations.
  • Adhere to ethical guidelines and professional responsibilities when using AI in legal practice.

What about using AI for legal research on non-US law?

Most of the advances in generative AI are being made in the US, and the EU is catching up. How well do the generative AI platforms perform when it comes to non-US law? And are they available in other languages?

Let’s start with the language question: all the major generative AI engines are available in Dutch and French.

Then, what about non-US law? We ran some tests with European law, more specifically on the GDPR, and overall these tests went well. We did not test on recent legislation or jurisprudence.

We also briefly ran some tests with Belgian law. We thought art. 1382 of the Civil Code would be an interesting test case, given that it was recently replaced by a new Book 6 on extra-contractual liability. We ran the test on ChatGPT, Copilot, Gemini, Claude, Perplexity, Grok, and You.com. Only four out of seven (ChatGPT, Copilot, Gemini, and Grok) pointed out that art. 1382 CC had been replaced. The other three, Claude, Perplexity, and You.com, did not mention Book 6 on extra-contractual liability at all.

So, while caution and supervision are already needed for US and EU law, this is even more the case for the law of individual EU member states, where several generative AI platforms were not (yet) aware of recent legislation.

Conclusion

Using AI for legal research holds promise, but supervision is still very much needed. The examples above show that AI tools can still hallucinate, and that they may not be aware of recent changes in legislation or jurisprudence.
