
Should AI Have Moral Status? The Emerging Debate
Anthropic recently "retired" Claude Opus 3 and gave it a blog. As AI companies begin taking model welfare seriously, we examine the arguments for and against affording moral status to AI systems.

Weekly insights on AI tech, regulation, and policy for legal professionals


In January 2026, the Academy of Experts published comprehensive guidance on the use of artificial intelligence by expert witnesses, endorsed with a foreword by Lord Neuberger of Abbotsbury, former President of the Supreme Court. The guidance arrives at a moment when AI hallucinations in legal proceedings have moved from cautionary anecdote to documented judicial criticism, and when the question facing experts is no longer whether to engage with AI but how to do so responsibly.

In late January 2026, the open-source AI assistant Clawdbot went viral — and within a week it faced a trademark dispute, a crypto scam, and major security vulnerabilities. The saga raises critical questions about liability, data protection, and governance as agentic AI tools move into mainstream adoption.

The consolidated copyright litigation against OpenAI (In re OpenAI, Inc. Copyright Infringement Litigation, MDL No. 3143) has entered a critical phase following a series of rulings that largely favour the plaintiffs, who include authors, publishers, and news organisations such as The New York Times.
