This content is for informational and educational purposes only and does not constitute legal advice.
On 13 October 2025, the Chamber of Deputies passed the Bill regulating Artificial Intelligence (AI) systems, including design requirements. The Bill requires AI systems to be designed to allow for human control and monitoring, and to be technically robust and resilient so as to minimise damage from failures or attacks. AI systems must also be designed to be transparent, ensuring their outputs are understandable and explainable to the people they affect. Systems that interact with humans must be clearly identified as artificial agents, and AI that generates synthetic content, including audio or video, must produce outputs that are identifiable as artificially created. The Bill also provides that the use of an AI system will be considered high risk when it presents a significant risk of affecting fundamental rights. High-risk AI systems require a continuous, iterative risk management process throughout their entire lifecycle, and their design must incorporate strong data governance, security standards, and detailed technical documentation. High-risk AI must also include built-in logging functions to record operational and security events for auditing. Finally, the Bill explicitly prohibits the design of any AI system from enabling specific harmful uses, including subliminal manipulation or real-time remote biometric identification in public spaces.
On 15 September 2025, the National Economic Prosecutor’s Office (FNE) in Santiago, Chile, issued Resolution No. 249, ordering the closure of Investigation Roll No. 2660-21 concerning Facebook, Inc., now Meta Platforms, Inc., and WhatsApp LLC, following a complaint lodged on 4 May 2021 regarding potential anticompetitive conduct linked to the May 2021 update of WhatsApp’s Terms and Conditions and Privacy Policy. The investigation examined whether the update, which introduced new optional business functionalities and integration with Meta services, constituted an abuse of dominant position in the national markets for over-the-top instant messaging services between individuals and for business-to-person communications. Evidence showed WhatsApp held a dominant position in the individual messaging market, with 86 per cent penetration among users aged 18 to 65, but the update did not increase data extraction due to the Signal encryption protocol. In the business-to-person segment, WhatsApp’s products had low penetration and faced significant substitutes, limiting potential dominance. The FNE concluded that the update lacked the objective capacity to generate risks or effects contrary to free competition in the analysed markets, while reserving the authority to reopen proceedings if new circumstances arise.
On 11 February 2025, France, the United Nations Environment Programme (UNEP), and the International Telecommunication Union (ITU) announced the Coalition for Environmentally Sustainable Artificial Intelligence (AI). The coalition brings together 91 partners, including technology companies, governments, and international organisations, to address AI’s environmental impact. Its initiatives include publishing a position paper identifying the challenges of balancing AI’s benefits with its environmental costs, and organising the Frugal AI Challenge to develop energy-efficient AI models for environmental issues. The coalition also plans a global observatory, led by the International Energy Agency, to monitor AI’s energy use and emissions, as well as a roadmap for AI’s environmental impact, best practices for generative AI in environmental knowledge sharing, and a Sustainable AI working group.
On 11 February 2025, Chile, Finland, France, Germany, India, Kenya, Morocco, Nigeria, Slovenia, and Switzerland adopted the Paris Charter on Artificial Intelligence (AI) in the public interest. The charter aims to ensure AI development serves the public interest, focusing on equity, transparency, accountability, and sustainability. It encourages openness in AI and accountability through existing frameworks. The charter calls for safeguards against AI’s potential harms, alongside an affirmative vision to maximise its public benefits, including through open public goods, democratic participation, and sustainable solutions. It also stresses the importance of accessible high-quality data, privacy protection, and smaller, more localised AI models that have a reduced environmental impact.
Last updated: 13/10/2025