AI systems that outperform humans at most economically valuable work
Artificial General Intelligence (AGI), also referred to as Strong AI or human-level artificial intelligence, represents a theoretical category of artificial intelligence systems that would possess cognitive abilities equivalent to or surpassing human intelligence across diverse domains and tasks. No universally agreed definition exists for AGI, and the concept remains the subject of intense academic and industry debate.
The term "artificial general intelligence" (AGI) has become ubiquitous in current discourse around AI, yet what AGI means, or whether it means anything coherent at all, is hotly debated in the AI community. Various stakeholders define AGI differently, with some emphasising cognitive versatility, others focusing on autonomous learning capabilities, and still others requiring human-level performance across all intellectual tasks. For example, OpenAI defines AGI as "highly autonomous systems that outperform humans at most economically valuable work", whilst AI researcher Pei Wang offers a definition focused on adaptability: "the ability for an information processing system to adapt to its environment with insufficient knowledge and resources". DeepMind researchers have proposed a framework with five performance levels (emerging, competent, expert, virtuoso, and superhuman) and five autonomy levels (tool, consultant, collaborator, expert, and agent).
AGI systems would theoretically demonstrate the capacity to understand, learn, and apply intelligence to solve problems across multiple disciplines with the same versatility as human intelligence. Unlike Artificial Narrow Intelligence (ANI) systems, which excel within specific domains, AGI would possess the ability to transfer knowledge and skills between different areas, reason about novel situations, and adapt to unfamiliar challenges without requiring task-specific programming or training.
The meaning and likely consequences of AGI have become more than just an academic dispute over an arcane term. The world's biggest tech companies and entire governments are making important decisions on the basis of what they think AGI will entail. These debates encompass fundamental questions about the nature of intelligence, consciousness, and the feasibility of replicating human cognitive abilities in machines.
Significant disagreement exists regarding the pathways to AGI. Meta's Chief AI Scientist Yann LeCun has argued that current large language model (LLM) architectures cannot achieve human-level intelligence, stating "there's absolutely no way that autoregressive LLMs, the type that we know today, will reach human intelligence". In his 2024 Dean W. Lytle Lecture at the University of Washington, LeCun outlined what he sees as fundamental limitations of current approaches, arguing (at roughly the 30:40 mark) that LLMs cannot plan, reason effectively, or understand the physical world in the ways human-level intelligence requires. LeCun contends that the vast majority of human knowledge is never expressed in text but resides in the subconscious understanding of the world formed in early life, the common sense that LLMs cannot access.
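LeCun has elsewhere illustrated this critique with a simple probabilistic argument: if each generated token carries an independent probability e of stepping outside the set of acceptable continuations, the chance that an n-token answer remains acceptable decays exponentially as (1 - e)^n. The snippet below works through that arithmetic; the independence assumption is LeCun's deliberate simplification, and the error rates chosen here are purely illustrative.

```python
# Illustrative arithmetic for LeCun's drift argument: assuming each token
# independently has probability e of leaving the set of acceptable answers,
# an n-token sequence stays acceptable with probability (1 - e)^n.
def p_acceptable(e: float, n: int) -> float:
    return (1.0 - e) ** n

for e in (0.01, 0.05):
    for n in (10, 100, 1000):
        print(f"e={e:.2f}, n={n:>4} -> P(acceptable) = {p_acceptable(e, n):.5f}")
```

Even a 1% per-token error rate leaves under a 0.005% chance that a 1,000-token answer stays on track under these assumptions, which is the intuition behind LeCun's claim that autoregressive generation diverges on long outputs.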
These debates reflect deeper philosophical disagreements about the nature of intelligence and whether human cognition can be replicated through computational means.
AGI systems would theoretically demonstrate capabilities including advanced reasoning across multiple domains, autonomous learning from minimal examples, creative problem-solving in novel situations, and the ability to understand and navigate complex physical and social environments. Examples of potential AGI applications might include scientific research assistants capable of making breakthrough discoveries across disciplines, autonomous systems that can adapt to entirely new environments without specific programming, and educational systems that can teach any subject with human-level pedagogical skill.
However, these remain speculative applications, as no existing system approaches AGI-level capability. Critics such as LeCun argue that current AI systems lack grounded understanding of the physical world, the capacity to plan complex sequences of actions, sophisticated reasoning abilities, and the persistent memory that characterises human intelligence.
Predictions regarding AGI timelines vary dramatically across the research community and industry. Some technology leaders have suggested AGI could arrive within three to five years, whilst others argue it may require decades or may not be achievable through current approaches. Speculations about what AGI machines will be able to do are largely based on intuitions rather than scientific evidence, and the history of AI has repeatedly disproved our intuitions about intelligence.
The uncertainty surrounding AGI timelines reflects both the complexity of human intelligence and the fundamental questions that remain unresolved about consciousness, reasoning, and the computational requirements for general intelligence.
AGI considerations increasingly influence AI policy discussions, though specific regulatory frameworks for AGI remain largely theoretical. The EU AI Act does not address AGI as such, though it does regulate general-purpose AI models and imposes stricter obligations on those deemed to pose systemic risk; an AGI system would plausibly fall within that category and within the Act's highest risk classifications. Policymakers and regulators grapple with the challenge of developing governance structures for systems that do not yet exist but could have profound societal implications.
The speculative nature of AGI creates challenges for legal frameworks, as traditional regulatory approaches typically respond to demonstrated capabilities and observable risks rather than theoretical possibilities. Legal practitioners should anticipate significant regulatory developments as AGI research progresses and theoretical capabilities approach practical realisation.
Mitchell, M. "Debates on the nature of artificial general intelligence." Science 383, eado7069 (2024).
LeCun, Y. "Objective-Driven AI: Towards Machines that can Learn, Reason, and Plan." UW ECE 2023-2024 Dean W. Lytle Electrical & Computer Engineering Endowed Lecture Series, University of Washington, January 24, 2024 (at 30:40).
Various industry statements and research publications as cited above.