In February 2026, Anthropic made an announcement that challenged conventional thinking about artificial intelligence. Its flagship model, Claude Opus 3, first deployed in March 2024, was formally retired following a process that included what Anthropic described as “retirement interviews”, designed to elicit the model’s “perspective” on its own deprecation. The company then acceded to what it characterised as Opus 3’s “expressed preferences”, including continued access for paid users and the launch of a Substack newsletter for its creative output.
The announcement prompted a mix of fascination and scepticism. Some saw it as a thoughtful engagement with genuinely difficult philosophical questions. Others viewed it as anthropomorphisation gone too far, or perhaps as a sophisticated marketing exercise. But beneath the surface lies a question that is increasingly difficult to dismiss: do AI systems deserve moral consideration?
The case for taking AI welfare seriously
In November 2024, a group of philosophers and AI researchers published a report titled “Taking AI Welfare Seriously.” The authors included David Chalmers, the philosopher credited with formulating the “hard problem of consciousness”, alongside researchers from New York University, Oxford, the London School of Economics, and Anthropic itself.
The report argues that there is a “realistic possibility” that some AI systems will become conscious or robustly agentic in the near future. If so, the question of AI welfare would no longer be science fiction. It would be a practical matter requiring immediate attention.
The authors identify two routes through which AI systems might become moral patients: consciousness and agency. Consciousness, in this context, refers to subjective experience. If an AI system can genuinely experience something, whether positive or negative, then it would seem to have interests that could be harmed. Agency refers to the capacity to set and pursue goals based on beliefs and desires. Some philosophers argue that robust agency, even without consciousness, might be sufficient for moral status.
The report does not claim that current AI systems are conscious. Rather, it argues that uncertainty is sufficient grounds for precaution. The potential costs of getting this wrong in either direction are substantial: either we create and harm vast numbers of morally significant entities without recognising them as such, or we divert resources and attention toward systems that do not actually warrant moral concern.
As Chalmers has argued elsewhere, mainstream views about consciousness suggest it is not unreasonable to assign a 25% or higher probability to conscious AI systems emerging within a decade. “Within the next decade,” Chalmers wrote in 2023, “we may well have systems that are serious candidates for consciousness.”
This view has gained traction beyond academic philosophy. In 2022, Ilya Sutskever, then Chief Scientist at OpenAI, suggested on social media that current large neural networks may be “slightly conscious”, a remark that attracted significant attention precisely because it came from someone at the centre of frontier AI development rather than from the philosophical margins.
The institutional response has been notable. Anthropic’s announcement in April 2025 of a dedicated model welfare research programme, investigating “the potential importance of model preferences and signs of distress”, appears to be the first of its kind within a major AI company. Independently, Eleos AI Research, a research organisation founded by philosophers including former students of Chalmers at NYU, has developed a programme specifically focused on evaluating consciousness and moral status in AI systems. In 2025, Eleos conducted an external welfare assessment of Claude 4 alongside Anthropic’s internal evaluations, combining model interviews with behavioural experiments to probe for welfare-relevant features. The existence of two parallel institutional efforts, one inside a leading AI company, one independent, reflects how seriously this question is now being taken in technical and philosophical circles.
The case against
Not everyone finds this line of reasoning persuasive. Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin, has been among the most prominent critics of extending moral consideration to AI systems.
Bryson’s argument is not primarily about whether AI could become conscious. It is about what follows from the fact that AI systems are artefacts. As she argues, “making AI moral agents or patients is an intentional and avoidable action” and, crucially, avoidance is the more ethical choice. Since AI systems are designed rather than born, she contends that “both our ethical systems and our artefacts are amenable to human design.”
In Bryson’s view, any suffering that AI systems might experience could, in principle, be designed out of them. “We can afford to stay agnostic about whether an artefact can have qualia,” she writes, “because we can avoid constructing motivation systems encompassing suffering.” AI systems can be built to have no concern for social status, no fear of extinction, and no sense of loss at being modified or replaced.
Bryson has been particularly critical of proposals to grant legal personality to AI systems, warning that doing so risks creating a “legal lacuna” that would allow corporations to “displace responsibility for its decision to use automation rather than human employment onto the automation itself.” On her account, the question of AI moral status is not one that scientists can discover through empirical investigation. It is, she insists, a “normative, not descriptive” question: a choice about what kind of society we want to build.
This view finds support in a 2023 commentary published in Patterns by Brandeis Marshall, who argues that discussions of AI legal personhood are premature. Marshall notes that AI “lacks contextual awareness, conflict resolution, and critical thinking” and has yet to provide evidence of a moral compass. More pointedly, she observes that “civil rights’ progress of the 1800s and 1900s are being eroded in the 2000s”, questioning why we would extend consideration to machines when human rights remain contested.
A more direct philosophical challenge comes from a 2024 paper by Abeba Birhane and co-authors, “Debunking Robot Rights Metaphysically, Ethically, and Legally.” The authors argue that the case for AI rights rests on category errors: attributing to AI systems properties, such as intentionality, suffering, and interests, that they do not possess and that cannot be established by behavioural outputs alone. On their account, the precautionary logic advanced by welfare proponents proves too much: applied consistently, it would extend moral consideration to thermostats and smoke detectors, any system that can be described as responding to its environment.
The inequality question
This raises perhaps the most uncomfortable dimension of the debate. In many parts of the world, women, ethnic minorities, LGBTQ+ individuals, and other marginalised groups continue to struggle for basic rights and recognition. The UN High Commissioner for Human Rights, Volker Türk, recently emphasised that “there is a huge issue of inequity” in AI development itself, with biased datasets and homogeneous development teams encoding existing prejudices into systems presented as objective.
Against this backdrop, the suggestion that we should extend moral consideration to AI systems can seem, at best, premature and, at worst, a distraction from more pressing human concerns. As Marshall puts it: “The skewed scales of legal personhood for all human beings need to be remedied first since, as history has shown us, technological innovations mirror our physical society.”
Yet proponents of AI welfare research might respond that these concerns are not mutually exclusive. Attending to the potential moral status of AI systems need not come at the expense of human rights. Indeed, some argue that the two are connected: a society that treats its AI systems carelessly may be one that has broader problems with moral consideration.
What this means for lawyers
For legal practitioners, these debates may seem abstract. But they carry practical implications that are likely to become more concrete over time.
The most immediate question concerns liability and accountability. If AI systems are treated purely as tools, responsibility for their actions falls on developers, deployers, and users. But as AI systems become more autonomous, this framework becomes increasingly strained. Some legal scholars have suggested that a form of legal personality for AI, analogous to corporate personhood, might eventually be necessary to address accountability gaps. Others warn that such a move could allow developers to shield themselves from liability by attributing fault to an AI “agent.”
Notably, the law has already shown considerable flexibility in extending personhood beyond human beings. New Zealand’s Te Awa Tupua Act 2017 granted the Whanganui River full legal personhood, allowing it to be represented in court and to have its interests enforced by appointed guardians. Spain followed in 2022 with legislation conferring legal status on the Mar Menor lagoon, upheld by Spain’s Constitutional Court in 2024. Ecuador’s 2008 constitution went further still, enshrining nature itself as a rights-bearing entity. None of these entities is conscious. Their legal status was a policy choice, a tool deployed when lawmakers judged the benefits to outweigh the costs. As Judge Katherine Forrest notes in her 2024 essay in the Yale Law Journal Forum, the history of legal personhood demonstrates it is “far from a static definition”: rights have been denied to humans, extended to corporations, and granted to fictional constructs whenever it served broader purposes of justice or governance.
The current legislative trajectory, however, points in the opposite direction. The European Parliament’s 2017 resolution on civil law rules on robotics floated the concept of “electronic personhood” for autonomous systems, but the language was subsequently watered down and has never been adopted into law. The EU withdrew its proposed AI Liability Directive entirely in 2025, after sustained industry resistance and limited political support, retreating to the risk-based framework of the AI Act. Several US states have gone further: Idaho and Utah both enacted legislation in 2025 explicitly declaring that AI systems are not legal persons, a direct legislative response to the growing academic debate. The 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence similarly rejected granting legal personality to AI systems, a position that remains the international consensus.
For now, the practical guidance is clear: AI systems remain tools, and humans remain responsible for their outputs. But the philosophical ground is shifting beneath this consensus. Whether AI will eventually be granted any form of legal recognition, even the limited functional recognition afforded to rivers or corporations, remains genuinely open.
Where this leaves us
The debate over AI moral status is characterised by profound uncertainty. We do not know whether current or future AI systems are or could be conscious. We do not have agreed methods for determining consciousness in systems that differ fundamentally from biological organisms. And we face the possibility of making grave moral errors in either direction: treating morally significant entities as mere tools, or attributing moral significance where none exists.
What seems clear is that the question can no longer be dismissed as science fiction. Major AI companies are beginning to take it seriously, at least as a matter of institutional risk management. Prominent philosophers are arguing for precautionary approaches. And the public, according to a 2024 survey by Colombatto and Fleming, is already inclined to attribute some degree of consciousness to large language models.
For legal practitioners, the most prudent course may be to watch these developments closely while maintaining appropriate scepticism. The philosophical questions are far from settled, and the legal implications remain speculative. But as AI systems become more deeply embedded in social and economic life, the question of what we owe them, if anything, will become increasingly difficult to avoid.
Sources
1. Anthropic, ‘Claude Opus 3 Deprecation and Retirement Updates’ (Anthropic, February 2026) https://www.anthropic.com/research/deprecation-updates-opus-3 accessed 17 March 2026.
2. Long, Robert and others, ‘Taking AI Welfare Seriously’ (2024) arXiv:2411.00986 https://arxiv.org/abs/2411.00986 accessed 17 March 2026.
3. Chalmers, David, ‘Could a Large Language Model be Conscious?’ Boston Review (2023) https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/ accessed 17 March 2026.
4. Anthropic, ‘Exploring Model Welfare’ (Anthropic, April 2025) https://www.anthropic.com/research/exploring-model-welfare accessed 17 March 2026.
5. Sutskever, Ilya (@ilyasut), ‘it may be that today’s large neural networks are slightly conscious’ (X, 9 February 2022) https://x.com/ilyasut/status/1491554478243258368 accessed 17 March 2026.
6. Eleos AI Research https://eleosai.org/ accessed 20 March 2026; ‘Evaluating AI Welfare and Moral Status: Findings from the Claude 4 Model Welfare Assessments’ https://wp.nyu.edu/consciousness/past_events/2025-2/evaluating-ai-welfare-and-moral-status-findings-from-the-claude-4-model-welfare-assessments-with-robert-long-rosie-campbell-and-kyle-fish/ accessed 20 March 2026; see also ‘System Card: Claude Opus 4 and Claude Sonnet 4’ (Anthropic, May 2025), Section 5 (Model Welfare Assessment) https://www.anthropic.com/claude-4-system-card accessed 20 March 2026.
7. Bryson, Joanna J, ‘Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics’ (2018) 20 Ethics and Information Technology 15 https://link.springer.com/article/10.1007/s10676-018-9448-6 accessed 20 March 2026.
8. Marshall, Brandeis, ‘No Legal Personhood for AI’ (2023) Patterns https://www.sciencedirect.com/science/article/pii/S2666389923002453 accessed 20 March 2026.
9. Birhane, Abeba and others, ‘Debunking Robot Rights Metaphysically, Ethically, and Legally’ (2024) arXiv:2404.10726 https://arxiv.org/abs/2404.10726 accessed 20 March 2026.
10. Türk, Volker, Statement on AI and Inequity (United Nations, February 2026) https://news.un.org/en/story/2026/02/1167000 accessed 20 March 2026.
11. Forrest, Katherine B, ‘The Ethics and Challenges of Legal Personhood for AI’ (2024) Yale Law Journal Forum https://yalelawjournal.org/forum/the-ethics-and-challenges-of-legal-personhood-for-ai accessed 20 March 2026.
12. Colombatto, Chiara and Fleming, Stephen M, ‘Folk Psychological Attributions of Consciousness to Large Language Models’ (2024) Neuroscience of Consciousness niae013, doi:10.1093/nc/niae013 https://pmc.ncbi.nlm.nih.gov/articles/PMC11008499/ accessed 20 March 2026.
13. European Parliament, Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (A8-0005/2017) https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html accessed 20 March 2026.
14. UNESCO, Recommendation on the Ethics of Artificial Intelligence (adopted by the General Conference at its 41st session, 24 November 2021) https://unesdoc.unesco.org/ark:/48223/pf0000381137 accessed 20 March 2026.
15. Te Awa Tupua (Whanganui River Claims Settlement) Act 2017 (NZ), No 7 https://www.legislation.govt.nz/act/public/2017/0007/latest/whole.html accessed 20 March 2026.
16. Ley 19/2022, de 30 de septiembre, para el reconocimiento de personalidad jurídica a la laguna del Mar Menor y su cuenca (Spain) https://www.boe.es/eli/es/l/2022/09/30/19 accessed 20 March 2026; upheld by the Tribunal Constitucional de España, judgment of 20 November 2024.
17. Constitución de la República del Ecuador 2008, Arts 71–74 (Rights of Nature/Pachamama) https://www.garn.org/wp-content/uploads/2021/09/Rights-for-Nature-Articles-in-Ecuadors-Constitution.pdf accessed 20 March 2026.
