Artificial Intelligence


National Commission calls for evidence on the regulation of AI in healthcare

6 January 2026

The following piece was also published here on 1 Crown Office Row’s Quarterly Medical Law Review.

AI is set to transform and disrupt the way in which healthcare is delivered.  The Government’s 10-year health plan for England commits the NHS to becoming “the most AI-enabled healthcare system in the world”, supported by the delivery of a new regulatory framework for medical devices including AI.

On 18 December 2025 the “National Commission on the Regulation of AI in Healthcare” published its formal Call for Evidence.[1]


Continue reading →

The Latest Judicial Guidance on AI: White text, bias, fakes, hallucinations, and the use of AI by litigants in person and lawyers

3 December 2025

Artificial Intelligence (AI) – Guidance for Judicial Office Holders (31 October 2025)

In the introduction, this Guidance note announces that “It updates and replaces the guidance document issued in April 2025”, which shows the speed at which AI is developing. It “sets out key risks and issues associated with using AI and some suggestions for minimising them”. And there have indeed been problems facing the judiciary lately arising particularly out of “AI hallucinations”. These are incorrect or misleading results that AI models generate.


Continue reading →

AI sued by image library for intellectual property infringement in training models

7 November 2025

Getty Images (US) Inc and others (Claimants) v Stability AI Ltd (Defendant) [2025] EWHC 2863 (Ch)

The legal dispute between Getty Images (and its associated companies) and Stability AI revolves around complex issues of copyright infringement, database rights, trademark infringement, and passing off. The arguments centred on the use of Getty Images' visual content in the training and operation of Stability AI's generative AI model, Stable Diffusion. Law firm Mishcon de Reya has described this as "one of the most anticipated cases in recent years". The case has significant implications for intellectual property law as it intersects with the development and deployment of AI technologies in the UK.


Background and Parties
The claimants in the case are several related companies under the Getty Images brand. These entities collectively own or have exclusive licences over millions of high-quality photographic and artistic images referred to as the "Visual Assets" or "Copyright Works".
Stability AI Limited, the defendant, is a UK-based company that developed Stable Diffusion, a deep learning image generation tool that creates images from text or image prompts. The model was trained on around 12.3 million visual assets, together with associated captions, drawn from the Getty Images websites as well as from publicly accessible third-party websites.

According to Getty Images, Stability AI scraped millions of copyright-protected images from the Getty Images websites without authorisation.

The Core Claims
Getty Images initially brought a broad claim including allegations of primary and secondary copyright infringement, database right infringement, trademark infringement, and passing off. They argued that:
• Stability AI unlawfully used Getty’s copyrighted works without permission to train the AI model.
• The AI model outputs sometimes reproduced Getty’s images or bore their trademarks (watermarks), infringing Getty’s rights.
• Stability AI's making of the model weights available for download constituted secondary copyright infringement. (Model weights are the values that determine how inputs are transformed into outputs in a neural network, reflecting the strength and direction of connections between artificial neurons after training. During training, optimisation procedures adjust these weights so the model improves at a task; the final set of weights effectively encodes the model's learned "knowledge" from data. These "weights" are machine-readable parameters, distinct from source code text: they are large arrays of numbers that operationalise the model's behaviour rather than human-authored narrative code. A toy illustration follows this list.)
• Use of Getty’s trademarked watermarks within generated images constituted trademark infringement.
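To make the idea of model weights concrete, here is a minimal sketch in Python of a single trained "neuron". It is purely illustrative and has nothing to do with Stable Diffusion's actual architecture; the numbers in w and b simply stand in for parameters produced by training.

    # A toy "neuron": the weights below are illustrative numbers of the kind
    # produced by training, not values authored (or explainable) by a human.
    import numpy as np

    w = np.array([0.42, -1.73, 0.05])   # learned weights: machine-readable parameters
    b = 0.12                            # learned bias

    def neuron(x):
        # The output is numbers transformed by numbers; no human-written
        # narrative code explains why these particular values work.
        return np.tanh(np.dot(w, x) + b)

    print(neuron(np.array([1.0, 0.5, -0.2])))   # a single learned transformation

A model such as Stable Diffusion is, in essence, a vast array of such parameters, so distributing the weights file distributes the trained behaviour without distributing any training images.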

As the judge observed,

Both sides emphasise the significance of this case to the different industries they represent: the creative industry on one side and the AI industry and innovators on the other. Where the balance should be struck between the interests of these opposing factions is of very real societal importance. Getty Images deny that their claim represents a threat to the AI industry or an attempt to curtail the development and use of AI models such as Stable Diffusion. However, their case remains that if creative industries are exploited by innovators such as Stability without regard to the efforts and intellectual property rights of creators, then such exploitation will pose an existential threat to those creative industries for generations to come. [para 12]

In her summary of the judgment, Nina O'Sullivan of Mishcon de Reya observes that attention will now turn to the response to the government's consultation on copyright and GenAI, as it faces pressure from creative industries opposing a general text and data mining exception that would allow AI companies to scrape copyright works unless rights holders expressly opt out: see Getty Images v Stability AI: Unpacking the High Court's judgment.


Continue reading →

Law Pod UK latest episode: the computer says no!

22 April 2022

In Episode 163, Rosalind English talks to Ariane Adam and Tatiana Kazim of the Public Law Project about automated decision making (ADM) in the public sector, and the problems of transparency and automation bias where these decisions affect people's rights. This interview was held shortly after the House of Lords Justice and Home Affairs Committee published its report on new technologies and the application of the law.

We discuss a number of issues, in particular those that arose in the Post Office “Horizon” accountancy scandal, and the case of R (Eisai Ltd) v National Institute for Health and Clinical Excellence [2008] EWCA Civ 438. The defendant, responsible for appraising clinical benefits and cost-effectiveness of health care interventions, had refused to provide the claimant with a fully executable version of the model it used to assess the cost-effectiveness of the claimant’s drugs. The Court of Appeal held that procedural fairness required release of the fully executable version of the model [66]. It rejected the defendant’s claims that disclosure would undermine confidentiality or be overly costly, noting at [65] that the court should be ‘very slow to allow administrative considerations of this kind to stand in the way of its release’. 

The PLP has also published a summary of the JHAC report here.

Law Pod UK is available on Spotify, Apple Podcasts, Audioboom, Player FM, ListenNotes, Podbean, iHeart, Radio Public, Deezer, or wherever you listen to our podcasts. Please remember to rate and review us if you like what you hear.

The providers of ‘Ride Hailing apps’ and their drivers: another judgment from Amsterdam

19 March 2021

Three applicants v Ola Netherlands B.V. C/13/689705 / HA RK 20-258, District Court, Amsterdam (11 March 2021)

An Amsterdam Court has ordered Ola (an app-based taxi-hailing company similar to Uber) to be more transparent about the data it uses as the basis for decisions on suspensions and wage penalties, in a ruling that breaks new ground on the rights of workers subject to algorithmic management.

James Farrar and Yaseen Aslam, who won the landmark victory in the UK Supreme Court in February, led the action by a group of UK drivers and a Portuguese driver, who brought three separate cases against Ola and Uber seeking fuller access to their personal data.

The following is a summary of the case against Ola taxis. Anton Ekker (assisted by AI expert Jacob Turner, whom we interviewed on Law Pod UK here) represented the drivers. He said that this case was the first time, to his knowledge, that a court had found that workers were subject to automated decision-making (as defined in Article 22 of the GDPR), thus giving them the right to demand human intervention, express their point of view and appeal against the decision.

The Facts

Ola is a company whose parent company is based in Bangalore, India. Ola Cabs is a digital platform that pairs passengers and cab drivers through an app. The claimants are employed as ‘private hire drivers’ (“drivers”) in the United Kingdom. They use the services of Ola through the Ola Driver App and the passengers they transport rely on the Ola Cabs App.

Proceedings are pending in several countries between companies offering services through a digital platform and drivers over whether an employment relationship exists.

By separate requests dated 23 June 2020, the first two claimants requested Ola to disclose their personal data processed by Ola and make it available in a CSV file. The third claimant made an access request on 5 August 2020. Ola provided the claimants with a number of digital files and copies of documents in response to these requests.

Ola has a “Privacy Statement” in which it has included general information about data processing.

All references in this judgment are to the AVG, the Dutch designation of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of individuals with regard to the processing of personal data and on the free movement of such data (the GDPR).


Continue reading →

Facial Recognition Technology not “In Accordance with Law”

13 August 2020

R (on the application of Edward Bridges) v Chief Constable of South Wales Police (Respondent) and Secretary of State for the Home Department, the Information Commissioner, the Surveillance Camera Commissioner and the Police and Crime Commissioner for South Wales (Interested Parties) [2020] EWCA Civ 1058

The Court of Appeal, overturning a Divisional Court decision, has found the use of a facial recognition surveillance tool by South Wales Police to be in breach of Article 8 of the European Convention on Human Rights (ECHR). The case was brought by Liberty on behalf of privacy and civil liberties campaigner Ed Bridges. The appeal was upheld on the basis that the interference with Article 8 of the ECHR, which guarantees the right to respect for private and family life, was not "in accordance with law" due to an insufficient legal framework. However, the court found that, had it been in accordance with law, the interference caused by the use of facial recognition technology would not have been disproportionate to the goal of preventing crime. The court also found that the Data Protection Impact Assessment (DPIA) was deficient, and that South Wales Police (SWP), who operated the technology, had not fulfilled their Public Sector Equality Duty.


Continue reading →

Government Scraps Immigration “Streaming Tool” before Judicial Review

6 August 2020

In response to a legal challenge brought by the Joint Council for the Welfare of Immigrants (JCWI), the Home Office has scrapped an algorithm used for sorting visa applications. Represented by Foxglove, a legal non-profit specialising in data privacy law, JCWI launched judicial review proceedings, arguing that the algorithmic tool was unlawful on the grounds that it was discriminatory under the Equality Act 2010 and irrational under common law.

In a letter to Foxglove dated 3 August on behalf of the Secretary of State for the Home Department (SSHD), the Government Legal Department stated that it would stop using the algorithm, known as the "streaming tool", "pending a redesign of the process and way in which visa applications are allocated for decision making". The Department denied that the tool was discriminatory. During the redesign, visa application decisions would be made "by reference to person-centric attributes… and nationality will not be taken into account".


Continue reading →

Machine Learning in Healthcare: Regulating Transparency

18 June 2020

PHG, linked with Cambridge University, provides independent advice and evaluations of biomedical and digital innovations in healthcare. PHG has recently published a series of reports exploring the interpretability of machine learning in this context. The one I will focus on in this post is the report considering the requirements of the GDPR for machine learning in healthcare and medical research by way of transparency, interpretability, or explanation. Links to the other reports are given at the end of this post.

Just a brief summary of machine learning in healthcare (for the detail, go to PHG’s report Machine Learning Landscape).

Machine learning typically denotes "methods that only have task-specific intelligence and lack the broad powers of cognition feared when 'AI' is mentioned". Artificial intelligence (AI) can be defined as "the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence." We are only beginning to realise the scope of intelligence that is silicon-based, rather than meat-based, in the reductionist words of neuroscientist and author Sam Harris. It is important too to grasp the difference between types of programming. As this report puts it,

Machine learning as a programming paradigm differs from classical programming in that machine learning systems are trained rather than explicitly programmed. Classical programming combines rules and data to provide answers. Machine learning combines data and answers to provide the rules.
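A toy example (mine, not the report's) may make the contrast concrete. In the classical half of the Python sketch below the human writes the rule; in the machine learning half the system infers the rule from data and answers:

    # Classical programming: rules + data -> answers
    def fahrenheit(celsius):              # the rule is authored by a human
        return celsius * 9 / 5 + 32

    # Machine learning: data + answers -> rules
    import numpy as np
    celsius = np.array([0.0, 10.0, 20.0, 37.0, 100.0])   # data
    answers = celsius * 9 / 5 + 32                        # answers
    slope, intercept = np.polyfit(celsius, answers, 1)    # the system infers the rule
    print(slope, intercept)               # ~1.8 and ~32.0: the learned "rule"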


Continue reading →

The Use of Live Facial Recognition Technology in Scotland: A New North-South Divide?

25 February 2020

Earlier this month, the Scottish Parliament’s Justice Sub-Committee on Policing published a report which concluded that live facial recognition technology is currently “not fit” for use by Police Scotland. 

Police Scotland had initially planned to introduce live facial recognition technology (“the technology”) in 2026. However, this has now been called into question as a result of the report’s findings – that the technology is extremely inaccurate, discriminatory, and ineffective. Not only that, but it also noted that the technology would be a “radical departure” from Police Scotland’s fundamental principle of policing by consent.  

In light of the above, the Sub-Committee concluded that there would be “no justifiable basis” for Police Scotland to invest in the technology.  

Police Scotland agreed – at least for the time being – and confirmed in the report that they will not introduce the technology at this time. Instead, they will engage in a wider debate with various stakeholders to ensure that the necessary safeguards are in place before introducing it. The Sub-Committee believed that such a debate was essential in order to assess the necessity and accuracy of the technology, as well as the potential impact it could have on people and communities. 

The report is undoubtedly significant as it reaffirms that the current state of the technology is ineffective. It therefore strengthens the argument that we should have a much wider debate about the technology before we ever introduce it onto our streets. This is important not only on a practical level but also from a human rights perspective, especially set against the backdrop of the technology’s controversial use elsewhere.  


Continue reading →

AI – a tool for the law, or its digital master?

18 November 2019

In the latest Henry Brooke Lecture (12th November, hosted by BAILII and Freshfields Bruckhaus Deringer), Supreme Court Justice Lord Sales warned that the growing role of algorithms and artificial intelligence in decision making poses significant legal problems.

He cited as an example a recent case in Singapore, in which the judge had to decide on mistake in contract – except that the two contracting parties were both algorithms. In that instance the judge was able to identify the human agents behind the programs, but that will soon not be the case.


Continue reading →

Rise of the algorithms

4 November 2019

The use of algorithms in public sector decision making has broken through as a hot topic in recent weeks. The Guardian recently ran the “Automating Poverty” series on the use of algorithms in the welfare state. And on 29 October 2019 it was reported that the first known legal challenge to the use of algorithms in the UK, this time by the Home Office, had been launched. It was timely, then, that the Public Law Project’s annual conference on judicial review trends and forecasts was themed “Public law and technology”.

Basic tech for lawyers

The conference helpfully opened with a lawyer-friendly run down of algorithms and automation. Dr. Reuben Binns (ICO Postdoctoral Research Fellow in AI) drew a number of useful distinctions.

The first was between rule-based and statistical machine learning systems. In rule-based systems, the system is programmed to apply a decision-making tree. The questions asked and the path to a particular outcome, depending on the answers given, can be depicted by way of flow-chart (even if that flow-chart might be very large, involving numerous branches). In contrast, statistical machine learning involves a computer system training itself to spot patterns and correlations in data sets, and to make predictions based on those patterns and correlations. The computer system is first trained on data sets provided by the system designer. Once trained, it can be used to infer information and make predictions based on new data. These systems might be used, for example, to assess the risk of a person re-offending, where the system has been trained on existing data as to re-offending rates. It has long been known that machine-learning systems can be biased, not least because the data on which they are trained is often biased.
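The distinction can be illustrated in a few lines of Python (an invented sketch with synthetic data, not any system actually in use). The first function is a rule-based system that could be drawn as a flow-chart; the second is a statistical model that derives its own rules from training data:

    # Rule-based system: a human-authored decision tree.
    def rule_based_risk(age, prior_offences):
        if prior_offences > 3:
            return "high"
        if age < 25:
            return "medium"
        return "low"

    # Statistical machine learning: rules are inferred from (synthetic) training data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X_train = np.array([[19, 4], [42, 0], [23, 1], [35, 5], [51, 0], [28, 2]])  # [age, priors]
    y_train = np.array([1, 0, 0, 1, 0, 1])    # 1 = re-offended in the training records

    model = LogisticRegression().fit(X_train, y_train)
    print(model.predict_proba([[22, 3]]))     # predicted re-offending probability

If the training records are themselves biased, the model will faithfully learn and reproduce that bias, which is precisely the concern noted above.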


Continue reading →

Can we build AI that doesn’t turn on us? Is it already too late?

18 April 2018

A report from the UK House of Lords Select Committee on Artificial Intelligence has made a number of recommendations for the UK’s approach to the rise of algorithms. The report ‘AI in the UK: ready, willing and able?’ suggests the creation of a cross-sector AI Code to help mitigate the risks of AI outstripping human intelligence.

The main recommendation in the report is that autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence. The committee calls for the Law Commission to clarify existing liability law and to consider whether it will be sufficient when AI systems malfunction or cause harm to users. The authors foresee a scenario where AI systems may

malfunction, underperform or otherwise make erroneous decisions which cause harm. In particular, this might happen when an algorithm learns and evolves of its own accord.

The authors of the report confess that it was "not clear" to them or their witnesses whether "new mechanisms for legal liability and redress in such situations are required, or whether existing mechanisms are sufficient". Their proposals, for securing some sort of prospective safety, echo Isaac Asimov's three laws of robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But these elaborations of principle may turn out to be merely semantic. The safety regime is not just a question of a few governments and tech companies agreeing on various principles. This is a global problem – and indeed even if Google were to get together with all the other giants in this field (Alibaba, Alphabet, Amazon, Apple, Facebook, Microsoft and Tencent), it may not be able to anticipate the consequences of building machines that can self-improve.
Continue reading →

New podcast: they’ve come for our cars, when will they go for your brief?

1 September 2017

We have just posted a discussion here between 1 Crown Office Row recruit Thomas Beamont and Rosalind English on the reach of Artificial Intelligence into the legal world: click on Episode 10 of our podcast series.

Law Pod UK is freely available for download on iTunes

Related material:

Computer algorithm predicts most Strasbourg judgments

22 January 2017

Artificial intelligence … it's no longer in the future. It's with us now.

I posted a review of a book about artificial intelligence in autumn last year. The author’s argument was not that we might find ourselves, some time in the future, subservient to or even enslaved by cool-looking androids from Westworld. His thesis is more disturbing: it’s happening now, and it’s not robots. We are handing over our autonomy to a set of computer instructions called algorithms.

If you remember from my post on that book, I picked out a paragraph that should give pause to any parent urging their offspring to run the gamut of law school, training contract, pupillage and the never-never land of equity partnership or tenancy in today's competitive legal industry. Yuval Noah Harari suggests that everything lawyers do now – from the management of company mergers and acquisitions, to deciding on intentionality in negligence or criminal cases – can and will be performed a hundred times more efficiently by computers.

Now here is proof of concept. University College London has just announced the results of the project it gave to its AI researchers, working with a team from the universities of Sheffield and Pennsylvania. Its news website announces that a machine learning algorithm has just analysed, and predicted, “the outcomes of a major international court”:

The judicial decisions of the European Court of Human Rights (ECtHR) have been predicted to 79% accuracy using an artificial intelligence (AI) method.

Continue reading →

No more human rights? Wait. No more lawyers??

28 September 2016

Not only is God dead, says Israeli professor Yuval Noah Harari, but humanism is on its way out, along with its paraphernalia of human rights instruments and lawyers for their implementation and enforcement. Whilst they and we argue about equality, racism, feminism, discrimination and all the other shibboleths of the humanist era, silicon-based algorithms are quietly taking over the world.

His new book, Homo Deus, is the sequel to Sapiens, reviewed on the UKHRB last year. Sapiens was "a brief history of mankind", encompassing some seventy thousand years. Homo Deus considers the future of humankind and whether we are going to survive in our present form, not even for another thousand years, but for a mere 200 years, given the rise of huge new forces of technology, of data, and of the potential of permissive rather than merely preventative medicine.

We are suddenly showing unprecedented interest in the fate of so-called lower life forms, perhaps because we are about to become one.

Harari’s message in Sapiens was that the success of the human animal rests on one phenomenon: our ability to create fictions, spread them about, believe in them, and then cooperate on an unprecedented scale.  These fictions include not only gods, but other ideas we think fundamental to life, such as money, human rights, states and institutions. In Homo Deus he investigates what happens when these mythologies meet the god-like technologies we have created in modern times.

In particular, he scrutinises the rise and current hold of humanism, which he regards as no more secure than the religions it replaced. Humanism is based on the notion of individuality and the fundamental tenet that each and everybody’s feelings and experiences are of equal value, by virtue of being human. Humanism cannot continue as a credible thesis if the concept of individuality is constantly undermined by scientific discoveries, such as the split brain, and pre-conscious brain activity that shows that decisions are not made as a result of conscious will (see the sections on Gazzaniga’s and Kahneman’s experiments in Chapter 8 “The Time Bomb in the Laboratory”).

…once biologists concluded that organisms are algorithms, they dismantled the wall between the organic and inorganic, turned the computer revolution from a purely mechanical affair into a biological cataclysm, and shifted authority from individual networks to networked algorithms.

… The individual will not be crushed by Big Brother; it will disintegrate from within. Today corporations and governments pay homage to my individuality, and promise to provide medicine, education and entertainment customised to my unique needs and wishes. But in order to do so, corporations and governments first need to break me up into biochemical subsystems, monitor these subsystems with ubiquitous sensors and decipher their working with powerful algorithms. In the process, the individual will transpire to be nothing but a religious fantasy.

Continue reading →

Welcome to the UKHRB

This blog is run by 1 Crown Office Row barristers' chambers. Subscribe for free updates here. The blog's editorial team is:

Commissioning Editor:
Jasper Gold

Assistant Editor:
Allyna Ng

Editors:
Rosalind English
Angus McCullough KC
David Hart KC
Martin Downs

Jim Duffy
Jonathan Metzer

Disclaimer


This blog is maintained for information purposes only. It is not intended to be a source of legal advice and must not be relied upon as such. Blog posts reflect the views and opinions of their individual authors, not of chambers as a whole.

Our privacy policy can be found on our ‘subscribe’ page or by clicking here.
