Civil Procedure Guidance on AI and “Fake Authorities”
31 March 2026
In two weeks’ time my interview with Jacob Turner and Michael Workman on the Judicial Taskforce’s draft Statement and Consultation on AI and private law will come out on Law Pod UK. In the meantime, a short note on the guidance on this subject in Civil Procedure News, put out by The White Book Service (Issue 3/2026, 11 March 2026).
The guidance cites the notorious cases of R (Ayinde) v London Borough of Haringey [2025] EWHC 1383 and R (Munir) v Secretary of State for the Home Department (AI hallucinations) [2026] UKUT 81. Both cases involved the “incautious” use of AI, including in ways that could result in the loss of privilege through uploading information to an AI tool that is open to the public.
And of course there is the use of fake authorities. In the Munir case the Upper Tribunal issued a rare show cause notice, requiring an explanation of why the grounds of appeal to the Tribunal had cited a Court of Appeal judgment that could be found nowhere on BAILII, and why they had also cited another Court of Appeal judgment which, while it did exist, was not authority for the proposition it was said to support.
Had the immigration adviser in question not referred himself to the Immigration Advice Authority, the Tribunal would itself have referred Mr Mohammed in order to “stop false material coming before the Tribunal which leads to considerable public expense due to the need to address the problem”.
The Tribunal in Munir went on to observe that it would be
“easy to think that this is a case about the naïve use of generative AI, but it is not merely that: it is principally about supervision and the obligation to ensure that the tribunal is not misled. It matters not how citation errors come about. Whether they are inserted by a hapless trainee or by ChatGPT is really neither here nor there; the point is that the qualified legal professional with conduct of the matter is expected to ensure that such documents are checked, that errors are identified, and that only accurate documents are sent to the tribunal…. Failure to check is also wasteful of an opponent’s time, thereby potentially leading (in judicial review proceedings) to large awards of costs.”
As the authors of Civil Procedure News note,
“This case raises continuing concerns about the use of fake authorities, notwithstanding the Divisional Court’s guidance in Ayinde. It also, apparently for the first time, raises concerns about the use of open AI tools by lawyers in ways that can result in breaches of client confidentiality and loss of legal professional privilege concerning information uploaded to such tools. It ought to be apparent that the risk of such breaches is not confined to lawyers but might also arise through the use of AI tools by, for instance, expert witnesses”.
Tune in for our next episode on AI and Private Law, and the proposals for circumventing the problems of liability and causation thrown up by the autonomy, capacity and self-teaching nature of generative AI.



