AI systems acting independently to achieve goals
Agentic AI, also referred to as agentic artificial intelligence or AI agents, is a category of artificial intelligence systems characterised by their capacity for autonomous action and goal-directed behaviour. No universally agreed definition exists across legal, technical, or regulatory contexts, reflecting the nascent nature of this technology and its regulatory treatment.
Agentic AI refers to AI systems that can take independent, goal-directed actions across digital environments. These systems can plan tasks, make decisions, adapt based on results, and interact with software tools or systems with little or no human intervention. Unlike traditional AI systems that primarily respond to specific prompts or inputs, agentic AI operates autonomously to achieve specified objectives through multi-step processes and decision-making sequences.
The distinguishing characteristics of agentic AI are autonomous planning (the ability to define the actions needed to achieve specified goals), tool integration (direct interaction with external systems, tools, and application programming interfaces), and independent execution (completion of multi-step tasks without continuous human intervention).
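These three capabilities can be illustrated with a minimal sketch in Python. All names here (the planner, the tool table, the goal string) are hypothetical stand-ins, not a real agent framework: a production system would delegate planning to a language model rather than a lookup.

```python
# Minimal sketch of an agentic loop (hypothetical names, not a real framework),
# illustrating the three capabilities described above: planning (goal -> steps),
# tool integration (dispatch to external callables), and independent execution
# (multi-step completion without further human input).
from typing import Callable


def plan(goal: str) -> list[dict]:
    """Stand-in planner: decompose a goal into tool invocations.
    A real agent would delegate this step to an LLM."""
    if goal == "report quarterly sales":
        return [
            {"tool": "query_db", "args": {"table": "sales"}},
            {"tool": "summarise", "args": {}},
        ]
    return []


# Tool integration: a registry mapping tool names to callables that would,
# in a real system, wrap databases, APIs, or other external services.
TOOLS: dict[str, Callable[..., str]] = {
    "query_db": lambda table: f"rows from {table}",
    "summarise": lambda: "summary written",
}


def run_agent(goal: str) -> list[str]:
    """Independent execution: carry out every planned step in sequence
    without pausing for human approval."""
    results = []
    for step in plan(goal):
        tool = TOOLS[step["tool"]]
        results.append(tool(**step["args"]))
    return results


print(run_agent("report quarterly sales"))
# -> ['rows from sales', 'summary written']
```

The key design point is the separation of concerns: the planner decides *what* to do, the tool registry defines *what the agent may touch*, and the execution loop runs unattended, which is precisely where the liability questions discussed later arise.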
Agentic AI represents a subset of artificial intelligence that builds upon the fundamental AI categories of Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). Current agentic AI systems operate within the ANI category, being designed for specific tasks or domains whilst exhibiting autonomous behaviour within those constraints. However, the autonomous and adaptive nature of agentic AI distinguishes it from conventional narrow AI applications.
Agentic AI systems frequently incorporate generative AI capabilities, enabling them to create content, responses, or solutions as part of their autonomous operations. These systems may utilise large language models (LLMs) and other generative technologies as components within their broader autonomous frameworks.
Agentic AI systems typically comprise several key architectural components beyond the foundational elements of compute, algorithms, and data. A large language model (LLM) serves as the cognitive foundation, providing natural language understanding, reasoning capabilities, and the ability to interpret goals and generate responses. A tool and orchestration layer enables the system to interact with external applications, databases, and services, coordinating multiple tools and managing workflows to achieve complex objectives. Memory systems allow agents to retain information across interactions, learn from previous experiences, and maintain context over extended periods. Planning and reasoning modules enable the system to break complex goals into actionable steps, evaluate alternative approaches, and adapt strategies based on outcomes. These components work together with the traditional AI building blocks of compute (processing power for real-time operations), algorithms (decision-making frameworks), and data (training and operational inputs) to create systems capable of autonomous goal achievement.
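How these components compose can be sketched as follows. This is an illustrative toy, with every class and method name invented for the example: the `LLMStub` "plans" by naive string splitting where a real system would call a model API, and the orchestrator shows how memory persists across interactions.

```python
# Illustrative composition of the architectural components described above
# (all names hypothetical): an LLM stub as the cognitive foundation, a memory
# store persisting context across interactions, a planning step, and an
# orchestrator coordinating tool execution.

class LLMStub:
    """Placeholder for the cognitive foundation; a real system would
    call a language model API here."""

    def reason(self, goal: str, memory: list[str]) -> list[str]:
        # Trivial "planning": split the goal on the word "then".
        steps = [f"do:{part.strip()}" for part in goal.split("then")]
        if memory:
            # Memory systems: earlier context can shape the new plan.
            steps.insert(0, f"recall:{memory[-1]}")
        return steps


class Orchestrator:
    """Tool and orchestration layer: coordinates planning, execution,
    and memory to achieve a stated goal."""

    def __init__(self, llm: LLMStub):
        self.llm = llm
        self.memory: list[str] = []  # retained across interactions

    def achieve(self, goal: str) -> list[str]:
        steps = self.llm.reason(goal, self.memory)    # planning module
        outcomes = [self._execute(s) for s in steps]  # tool execution
        self.memory.append(goal)                      # memory update
        return outcomes

    def _execute(self, step: str) -> str:
        action, _, payload = step.partition(":")
        return f"{action} ok ({payload})"


agent = Orchestrator(LLMStub())
first = agent.achieve("fetch data then write report")
second = agent.achieve("email summary")  # second call draws on memory
```

Note that the orchestrator, not the model, owns the memory and the execution loop; this mirrors the common pattern in which the LLM is one component inside a broader autonomous framework rather than the whole system.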
The regulatory treatment of agentic AI remains largely undefined, with existing legal frameworks struggling to address the unique challenges posed by autonomous AI systems. The EU AI Act (Regulation (EU) 2024/1689) does not specifically address AI agents, although an agent's system architecture and the breadth of its tasks may raise its risk profile; depending on their application and risk classification, agentic AI systems may therefore fall within various provisions of existing AI regulations.
The California Consumer Privacy Act (CCPA) addresses some autonomous decision-making through its definition of automated decisionmaking technology (ADMT), which includes "any technology that processes personal information and uses a computation to execute a decision, replace human decisionmaking or substantially facilitate human decisionmaking".
Key legal considerations include liability and accountability frameworks, as determining liability when agentic AI causes harm (including financial and reputational) presents a legal grey area. The autonomous nature of these systems raises questions about traditional principal-agent relationships and the extent to which organisations may be held liable for unpredictable autonomous actions.
The regulatory landscape for agentic AI continues to evolve as policymakers grapple with the implications of increasingly autonomous AI systems. Legal practitioners should anticipate significant developments in this area as the technology matures and regulatory frameworks adapt to address the unique challenges posed by AI systems capable of independent action.
Sources include the regulatory and legal instruments cited above: the EU AI Act (Regulation (EU) 2024/1689), the California Consumer Privacy Act provisions on automated decisionmaking technology, and emerging legal frameworks addressing autonomous AI systems.