In January 2026, The Academy of Experts published comprehensive guidance on the use of artificial intelligence by expert witnesses, endorsed with a foreword by Lord Neuberger of Abbotsbury, former President of the Supreme Court. The guidance arrives at a moment when AI hallucinations in legal proceedings remain on the rise, and when the question facing experts is no longer whether to engage with AI but how to do so responsibly.
The guidance at a glance
The guidance is structured in three parts. Section A provides background on AI technologies, how experts might use them, and the associated risks. Section B sets out the Academy's substantive guidance, including a risk classification framework. Section C offers a practical checklist for experts to apply in their work.
The core message is straightforward: experts remain ultimately responsible for their work product, and AI cannot substitute for independent professional judgment. As the guidance states, experts "cannot divest responsibility or evade their duties when they use AI" (p.5, 2.1).
A risk-based framework
Central to the guidance is a tripartite risk classification for AI use cases:
Prohibited uses include any application that would breach applicable law or regulation, violate the expert's duty to the court, or contravene contractual obligations. The guidance explicitly identifies "complete outsourcing of the expert's work to an AI tool" (p.8, 1.1 (iii)(a)) and uploading confidential case data to public AI systems as falling within this category (p.15).
High-risk uses encompass scenarios where AI generates substantive content for an expert's report, undertakes material analysis underpinning the expert's opinion, or recreates scenarios used as the basis for reasoning. The guidance advises that experts proposing high-risk uses should disclose this to instructing solicitors and obtain confirmation that there are no objections before proceeding.
Low-risk uses include employing AI for self-education, grammar checking, document organisation, and administrative tasks. Even here, the guidance cautions that context matters: using AI to research topics with which the expert has no familiarity could elevate an ostensibly low-risk use into higher-risk territory.
The hallucination problem
The guidance devotes particular attention to AI hallucinations, noting that "the number of reported cases in which hallucinations have come to light is increasing rapidly" (p.6, 3.3 (i)). It cites three recent English cases: Harber v Commissioners for His Majesty's Revenue and Customs [2023] UKFTT 1007 (TC), in which nine fictitious tribunal decisions were cited, and two 2025 High Court cases, Al-Haroun v Qatar National Bank (Q.P.S.C.) [2025] EWHC 1495 (KB) and Ayinde v London Borough of Haringey [2025] EWHC 1494 (KB), involving 18 and five fabricated cases respectively.
These incidents, the guidance notes, led to "judicial criticism, regulator referrals, and reputational damage for those involved" (p.6, 3.3 (i)). The implication is clear: experts who rely on AI-generated citations without independent verification do so at considerable professional risk.
Beyond hallucinations: confidentiality, privilege and bias
The guidance identifies several additional risk categories. On confidentiality, it warns that many publicly available AI tools "retain the right to use that material to improve the tool over time" (p.8, 3.3 (ii)), creating the possibility of inadvertent disclosure of privileged or confidential information. The risk applies to AI tools generally, but it is particularly acute for public models, where inadvertent disclosure may bear directly on the parties and the wider conduct of the proceedings.
On bias, it notes that generative AI tools may produce outputs reflecting "inherent toxicity" or assumptions based on their training data (p.8, 3.3 (iii)). On intellectual property, it flags that both training data and AI-generated outputs may be subject to third-party rights, with potential for infringement claims.
Practical safeguards
For experts who do use AI, the guidance recommends several safeguards: maintaining adequate human oversight, documenting key decisions and steps, and cross-referencing AI outputs with known facts and data. Experts are also urged to "maintain vigilance around AI" and to "always consider their professional duties and continuously reflect on whether their use of AI impacts these duties" (p.12, 2.1 (vi)). The guidance also emphasises the importance of understanding how a chosen AI tool operates, including its functionality, the type of information it processes, and the nature of its outputs.
Notably, the guidance addresses team dynamics: named experts "must understand if and how their team members are using or have used AI in connection with their work" (p.12, 2.1 (viii)), and should ensure those team members follow the same principles.
Disclosure obligations
The guidance advises experts to verify whether professional duties, applicable laws, or court rules require disclosure of AI use to the court or tribunal. Where the disclosure requirements are unclear, it suggests that disclosure "may still be advisable" (p.14, 2.1 (ix)), particularly for high-risk uses. This aligns with the broader trend towards transparency in AI-assisted legal work.
Looking ahead
The Academy's guidance is not the final word on this subject. AI regulation continues to evolve, and the guidance itself acknowledges that experts should "keep up-to-speed" with developments by following academic literature, reviewing legal changes, and undertaking practical training (p.13, 3.2).
For legal practitioners instructing experts, the guidance offers a useful framework for due diligence conversations. For experts themselves, it provides a structured approach to a set of questions that will only become more pressing as AI tools become more capable.
The full guidance, including the checklist and glossary, is available on The Academy of Experts website.
Sources:
- "The Academy of Experts published comprehensive guidance" → https://academyofexperts.b-cdn.net/wordpress/wp-content/uploads/2026/02/fs-26-01-AI.pdf
- "The Academy of Experts website" → https://academyofexperts.org
