In Episode 231 of Law Pod UK Jim Duffy is joined by David D. Cole, Professor of Law and Public Policy at Georgetown University and former National Legal Director of the American Civil Liberties Union. They discuss the US President’s invocation of emergency powers to deport, to attack vessels on the high seas, and to impose sweeping international trade tariffs.
In this case, the High Court considered the appropriate legal test for leaving findings of fact to juries in Article 2 inquests. Is it that such findings are arguable? Or is it that there is sufficient evidence to support them? The answer, quite firmly, is the latter.
In its judgment in the case of IA & Ors v Secretary of State for the Home Department [2025] EWCA Civ 1516, handed down on 26 November 2025, the Court of Appeal reaffirmed the correct test for establishing the existence of family life between non-core family members under Article 8 of the European Convention on Human Rights (“ECHR”). It also clarified the proper conceptual framework for considering the subtle interaction between the rights of non-claimant family members and the UK’s Convention obligations to individuals outside its territory. Finally, it emphasised the centrality of the Government’s immigration policy to any exercise considering the proportionality of an interference with an individual’s Article 8 rights in the immigration context.
Is law up to the problem of discrimination on grounds of (old) age? To discuss this question, Rosalind English is joined in today’s episode by regular Law Pod guest Alasdair Henderson of One Crown Office Row and Nina Georgantzi, a human rights lawyer and academic who serves as head of human rights advocacy at AGE Platform Europe. We discuss the “soft law” Recommendation of the Council of Europe passed in 2024, and the proposed UN convention against ageism. Alasdair brings his considerable experience as an Equality Commissioner to bear on the discussion, drawing on his litigation in this field under the Equality Act 2010 and other anti-discrimination laws.
Here are the full citations of the cases referred to in this episode:
Seldon (Appellant) v Clarkson Wright and Jakes (A Partnership) (Respondent) [2012] UKSC 16
Higgs v Farmor’s School [2025] EWCA Civ 109 (relevant paras are [171]–[172])
Imperial College Healthcare NHS Trust v Matar [2023] EAT 1
In Buzzard-Quashie v Chief Constable of Northamptonshire Police [2025] EWCA Civ 1397, the Court of Appeal has helpfully restated the law on (civil) contempt of court. The decision – arising out of a longstanding refusal by the Northamptonshire police force (“the police force”) to comply with orders from the Information Commissioner’s Office (“ICO”) and the courts to release footage from officers’ body-worn video (“BWV”) cameras – also affirms the liability of a chief constable for the acts and omissions of their subordinates.
The UK Home Office has begun a ten-week public consultation into the use of facial recognition and biometric technologies by the police, with a view to expanding the rollout of live facial recognition policing (currently limited to ten forces) across the entire UK. Among the Government’s proposals is the creation of a regulator to oversee police implementation of the technology; any new legislation arising from the consultation is unlikely to be in force for at least another two years. The Government has invested over £15 million in facial recognition policing since 2024.

The technology’s currently unregulated use has drawn sharp criticism from human rights and civil liberties groups, and in August the Equality and Human Rights Commission warned that its present implementation was a disproportionate infringement of human rights. Liberty director Akiko Hart responded positively to this week’s announcement of a consultation, but stressed that the Government “must halt the rapid rollout” of facial recognition and ensure that rights-prioritising safeguards are in place. Big Brother Watch director Silkie Carlo called the consultation “necessary but long overdue”, adding that police facial recognition should be paused immediately pending its outcome.

The technology’s strong tendency towards racial discrimination has raised particular concern, as the Home Office conceded this week: whereas white people are wrongly identified by the technology at a rate of only 0.04%, the rate is 5.5% for black people and 4% for Asian people. Earlier this year the Metropolitan Police declined to adopt live facial recognition at September’s far-right ‘Unite the Kingdom’ rally, despite deploying it weeks earlier at the Notting Hill Carnival.
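Those headline rates are easier to grasp as absolute numbers. The short sketch below scales the quoted false-identification rates to a hypothetical volume of scans (the figure of 10,000 scans per group is an assumption for illustration, not a Home Office statistic):

```python
# False-identification rates quoted by the Home Office, scaled to a
# hypothetical 10,000 scans per demographic group (the scan count is an
# assumption for illustration only).
rates = {"white": 0.0004, "black": 0.055, "Asian": 0.04}  # 0.04%, 5.5%, 4%
scans_per_group = 10_000

for group, rate in rates.items():
    expected = rate * scans_per_group
    print(f"{group}: ~{expected:.0f} wrongful identifications per {scans_per_group:,} scans")
# white: ~4, black: ~550, Asian: ~400
```

On these figures, a black person is well over a hundred times more likely than a white person to be wrongly identified by the technology.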
To what extent does the law afford protection to couples looking to foster children, in circumstances where that couple possesses (and vocalises) strong religious beliefs? This was the issue for consideration before Turner J, who heard this appeal in the King’s Bench Division of the High Court. Judgment was handed down on 18 November 2025.
In the introduction, this Guidance note announces that “It updates and replaces the guidance document issued in April 2025”, which in itself shows the speed at which AI is developing. It “sets out key risks and issues associated with using AI and some suggestions for minimising them”. And there have indeed been problems facing the judiciary lately, arising particularly out of “AI hallucinations”: incorrect or misleading results that AI models generate.
This interesting decision shows the intersection between the right to education and the right to freedom of religion under the ECHR. These are fast-evolving rights, particularly Article 9, whose “freedom” limb is becoming more important than its “religion” limb: Article 9 is increasingly taken to cover the right not to cleave to any religion at all.
In this case the arguments were focussed on the right to education under Article 2 Protocol 1 of the Convention, taken together with Article 9. The main issue before the Supreme Court can be briefly stated. Did religious education and collective worship provided in a school in Northern Ireland breach the rights of a child, and the child’s parents, under Article 2 of the First Protocol (“A2P1”) to the European Convention on Human Rights (“ECHR”) read with Article 9 ECHR?
What is particularly interesting and unusual about this judgment is that it emerges from Northern Ireland, with its own history of sectarianism and religious division. The very basis from which the case sprang goes back well over a hundred years: since Partition, schools formerly run by the Church of Ireland, the Presbyterian Church in Ireland, and the Methodist Church in Ireland have been under the control of what is now the Education Authority. That is where our story starts; details can be found in the Supreme Court’s press summary.
Before we get going on this story, let’s highlight this sharp observation about the NI education system in paragraph 88:
there is no commitment in the core syllabus to objectivity or to the development of critical thought. To teach pupils to accept a set of beliefs without critical analysis amounts to evangelism, proselytising, and indoctrination.
According to Strasbourg jurisprudence, the State is forbidden to pursue an aim of indoctrination that might be considered as not respecting parents’ religious and philosophical convictions. That is the limit that must not be exceeded [see Kjeldsen, Busk Madsen and Pedersen v Denmark (A/23) (1979–80) 1 EHRR 711 at [53]].
In this instance, the Supreme Court did not make a separate and distinct finding of indoctrination. It was unnecessary to do so, because conveying information and knowledge in a manner which is not objective, critical, and pluralistic amounts to indoctrination.
As I observed in Part I of this article, no UK court has yet issued a judgment in a libel or defamation claim concerning AI-generated content, but several cases and legal actions are emerging, and the issue is widely anticipated to reach the courts soon. Proceedings are already under way in other jurisdictions, including the US (see Part I) and Australia.
Belfast-based libel lawyer Paul Tweed is reportedly preparing a group action in the UK against technology providers (including OpenAI, Meta, Google, and Amazon), alleging that their AI chatbots and other AI-generated content breach defamation and privacy laws. The Defamation Act 2013 provides certain protections for internet intermediaries, specifically the statutory defences found in section 5. Under that section, operators of websites hosting user-generated content may enjoy immunity from suit if they comply with regulations after being notified of defamatory material. Social media platforms and hosts are generally not liable under UK law unless they have knowledge of, control over, or refuse to act upon notice of defamatory content. Claims must typically be directed at the original author; intermediary platform liability arises mainly where the author is unidentifiable or unreachable.
This proposed group action will argue that generative AI material produced by the likes of ChatGPT is new material that falls outside this immunity. Tweed is considering three grounds of action: defamation by AI chatbots; unauthorised use of works to train AI models; and the creation by AI of fake biographies that he says are being sold by the likes of Amazon. In his letter to the Northern Ireland Affairs Committee (February 2025), Mr Tweed asserted that there have been several serious examples of false allegations and misinformation appearing on a number of generative AI platforms and chatbots, including “particularly troubling instances” where leading figures from academia and the law have been wrongly accused of serious misconduct.
Put simply, intended parents should avoid embarking on a surrogacy arrangement where they do not meet, have any knowledge of or means of contacting the surrogate who carries their much wanted child. (Mrs Justice Theis DBE)
This case concerned an application by intended parents for a parental order in respect of an 18-month-old child following a surrogacy arrangement with a surrogate in Nigeria whom neither of the intended parents had met and about whom they had no information.
In this episode three environmental law experts gather to discuss how people without deep pockets can avail themselves of the Aarhus Convention to take legal action in respect of environmental harms like pollution and sewage. Environmental law, a subject that barely existed thirty years ago, is now an established part of English law, and is where international law, government policy and public interest litigation often meet head-on. Rosalind English introduces the panel moderator, Richard Wald KC, who chairs the Environmental Law Foundation (“ELF”). Emma Montlake, an executive director of the charity, helps to ensure that environmental decision-making is both robust and transparent. And Carol Day of Leigh Day solicitors is one of the most experienced lawyers in bringing environmental challenges through the courts. The full citations of the cases discussed in this episode are set out below.
River Action intervention in The National Farmers’ Union v Herefordshire Council & Ors [2025] EWHC 536 (Admin) (10 March 2025)
The King (on the application of) The Badger Trust, Wild Justice v Natural England and Secretary of State for Environment, Food and Rural Affairs [2025] EWHC 2761 (Admin)
Wildlife & Countryside Link intervention in C G Fry & Son Limited (Appellant) v Secretary of State for Housing, Communities and Local Government (formerly known as Secretary of State for Levelling Up, Housing and Communities) and another (Respondents) UKSC/2024/0108
Council for National Parks intervention in New Forest National Park Authority v (1) Secretary of State for Housing, Communities and Local Government (2) Mr Simon Lillington [2025] EWHC 726 (Admin)
HM Treasury v Global Feedback Ltd [2025] EWCA Civ 624 (Global Feedback Ltd has now changed its name to Foodrise Ltd; permission to appeal to the Supreme Court was granted on 31 October 2025 (see here))
Wild Justice v Pembrokeshire Coast National Park Authority and Adventure Beyond Ltd (Interested Party) [2025] EWHC 2249 (Admin)
We all want to know about American libel law, now that President Trump has launched his pre-action missile at the BBC. If he pursues his claim it will be under Florida law, where his defamation action will not be statute-barred: in the UK such claims must be commenced within one year of publication, whereas Florida allows two. There are other significant differences between the English and American defamation systems, which I will explore in this and the following post. Whatever the outcome of Trump v the BBC, the question occupying libel lawyers in the US at the moment does not concern a human-run journalistic enterprise, whatever its flaws. It is the collision between antiquated libel laws the world over and the runaway publication machine called Artificial Intelligence.
No UK court has yet issued a judgment in a libel or defamation claim concerning AI-generated content, but several cases and legal actions are emerging, and the issue is widely anticipated to reach the courts soon. I will discuss these later. There is rather more activity on this front across the pond. American defamation law is very different from ours, but it shows the enormous problems that arise when a technology provider is presented with a libel writ in respect of a statement distributed by AI which has caused serious harm to a person’s reputation. A recent example is set out in an article in The New York Times by Ken Bensinger, who reports that a solar contractor in Minnesota, called Wolf River Electric, noticed a dramatic fall-off in sales:
“When they pressed their former customers for an explanation, the answers left them floored.
The clients said they had bailed after learning from Google searches that the company had settled a lawsuit with the state attorney general over deceptive sales practices. But the company had never been sued by the government, let alone settled a case involving such claims.
Confusion became concern when Wolf River executives checked for themselves. Search results that Gemini, Google’s artificial intelligence technology, delivered at the top of the page included the falsehoods. And mentions of a legal settlement populated automatically when they typed “Wolf River Electric” in the search box.”
Unsurprisingly, Wolf River executives decided they had no choice but to sue Google for defamation. This is just one of half a dozen libel claims filed in the US over the past two years over content produced by AI tools that generate text and images. Another case, dating back to 2023, involved a talk radio host and Second Amendment advocate (the Second Amendment protects the right to bear arms) who found out that AI had falsely accused him of embezzlement; this was discovered by a journalist looking up the radio presenter’s name on the internet.
The Court of Appeal in Re D has overturned final care and placement orders made at an Issues Resolution Hearing (“IRH”), stating that judges must give clear, reasoned findings on the threshold criteria under section 31(2) Children Act 1989 (“CA 1989”), even where proceedings are uncontested or parents are absent.
In delivering the judgment, Cobb LJ, with whom Baker LJ and Miles LJ agreed, criticised the short form reasoning used by the Family Court and stressed the need for transparent judicial decision-making when the State intervenes in family life under Article 8 of the European Convention on Human Rights (“ECHR”).
The legal dispute between Getty Images (and its associated companies) and Stability AI revolves around complex issues of copyright infringement, database rights, trademark infringement, and passing off. The arguments centred on the use of Getty Images’ visual content in the training and operation of Stability AI’s generative AI model, Stable Diffusion. Law firm Mishcon de Reya has acclaimed this as “one of the most anticipated cases in recent years.” The case has significant implications for intellectual property law as it intersects with the development and deployment of AI technologies in the UK.
Background and Parties

The claimants in the case are several related companies under the Getty Images brand. These entities collectively own or hold exclusive licences over millions of high-quality photographic and artistic images, referred to as the “Visual Assets” or “Copyright Works.” Stability AI Limited, the defendant, is a UK-based company that developed Stable Diffusion, a deep learning image generation tool that creates images based on text or image prompts. Its training data included around 12.3 million visual assets, together with associated captions, taken from the Getty Images websites as well as publicly accessible third-party websites.
According to Getty Images, Stability AI scraped millions of its copyright-protected images from its websites without authorisation.
The Core Claims

Getty Images initially brought a broad claim including allegations of primary and secondary copyright infringement, database right infringement, trademark infringement, and passing off. They argued that:

• Stability AI unlawfully used Getty’s copyrighted works without permission to train the AI model.
• The AI model’s outputs sometimes reproduced Getty’s images or bore their trademarks (watermarks), infringing Getty’s rights.
• Stability AI’s making of the model weights available for download constituted secondary copyright infringement. (Model weights are the values that determine how inputs are transformed into outputs in a neural network, reflecting the strength and direction of connections between artificial neurons after training. During training, optimisation procedures adjust these weights so the model improves at a task; the final set of weights effectively encodes the model’s learned “knowledge” from the data. These weights are machine-readable parameters, distinct from source code text: they are large arrays of numbers that operationalise the model’s behaviour rather than human-authored narrative code. The sketch below illustrates the distinction.)
• Use of Getty’s trademarked watermarks within generated images constituted trademark infringement.
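For readers who want to see the distinction concretely, here is a minimal, hypothetical sketch in Python using NumPy. It bears no relation to Stable Diffusion’s actual architecture or training; it simply shows that a model’s weights are arrays of numbers shaped by training, which can be saved and distributed as a file separate from any human-authored code:

```python
import numpy as np

# A toy one-layer linear model: its "weights" are just an array of numbers.
rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 1))            # initial parameters

X = rng.normal(size=(100, 3))                # toy inputs
y = X @ np.array([[1.5], [-2.0], [0.5]])     # toy targets

# "Training": gradient descent nudges the weights so outputs fit the data.
for _ in range(200):
    grad = X.T @ (X @ weights - y) / len(X)  # gradient of mean squared error
    weights -= 0.1 * grad

# The learned "knowledge" now lives entirely in these numbers, which can be
# saved and shared as a file, separate from any source code.
np.save("model_weights.npy", weights)
print(weights.round(3))                      # approaches [[1.5], [-2.0], [0.5]]
```

Distributing the saved file distributes the learned parameters rather than the training images or any code; whether that act can amount to secondary copyright infringement was the nub of this part of the claim.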
As the judge observed,
“Both sides emphasise the significance of this case to the different industries they represent: the creative industry on one side and the AI industry and innovators on the other. Where the balance should be struck between the interests of these opposing factions is of very real societal importance. Getty Images deny that their claim represents a threat to the AI industry or an attempt to curtail the development and use of AI models such as Stable Diffusion. However, their case remains that if creative industries are exploited by innovators such as Stability without regard to the efforts and intellectual property rights of creators, then such exploitation will pose an existential threat to those creative industries for generations to come.” [para 12]