Artificial Intelligence (AI)

Computer systems that perform human-like intelligent tasks

Updated 8 September 2025

Definition

Artificial Intelligence (AI) is a contested concept for which no universally agreed definition exists across legal, technical, or regulatory contexts. The term encompasses diverse approaches to creating machine-based systems capable of performing tasks that typically require human intelligence. These tasks may include reasoning, learning, problem-solving, perception, language processing, and decision-making.

The European Union's Artificial Intelligence Act (Regulation (EU) 2024/1689), which represents the world's first comprehensive legal framework for AI, defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".

It is important to note that the EU AI Act's definition specifically requires "inference" as a key criterion. This requirement was deliberately included to define the scope of the legislation and distinguish AI systems from conventional software that operates through predetermined algorithms (such as simple "if-then" rules). However, this regulatory definition does not encompass all systems that might be considered artificial intelligence from a technical perspective. Some AI systems, particularly rule-based expert systems or deterministic algorithms, may not involve inference in the manner contemplated by the Act, yet could still be regarded as forms of artificial intelligence in broader technical or academic contexts.
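To make this distinction concrete, the following sketch contrasts a conventional "if-then" rule with a system that infers its decision rule from data. It is illustrative only: the spam-filtering scenario, feature choices, and all names are assumptions for the example (using scikit-learn's LogisticRegression), not examples drawn from the Act.

```python
# Illustrative sketch only: contrasts conventional rule-based software
# with a system that "infers" how to generate outputs from its inputs.
from sklearn.linear_model import LogisticRegression

# Conventional software: a predetermined "if-then" rule written by a human.
# Its behaviour is fully fixed in advance; nothing is inferred.
def rule_based_filter(message: str) -> bool:
    return "free money" in message.lower()

# Inference-based system: the decision rule is learned from example data
# rather than explicitly programmed. (Toy features: length and digit count.)
def featurise(message: str) -> list[float]:
    return [len(message), sum(ch.isdigit() for ch in message)]

training_messages = ["hi, lunch at 1?", "WIN 1000000 NOW 24/7",
                     "see you tomorrow", "claim 999 prizes 000"]
training_labels = [0, 1, 0, 1]  # 0 = legitimate, 1 = spam

model = LogisticRegression().fit(
    [featurise(m) for m in training_messages], training_labels)

# The model infers, from the input it receives, how to generate an output.
print(model.predict([featurise("collect 500 rewards 111")]))
```

The rule-based function would fall outside the Act's definition on the inference criterion, whereas the trained model derives its behaviour from data rather than from rules fixed in advance.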

Classification by Scope and Capability

[Diagram: classification of AI systems by scope and capability]

AI systems are commonly categorised into three broad types based on their scope and capabilities:

Artificial Narrow Intelligence (ANI), also known as Weak AI, refers to AI systems designed to perform specific, well-defined tasks. These systems excel within their designated domain but cannot generalise beyond their programmed parameters. Examples include image recognition systems, language translation software, and recommendation algorithms. The vast majority of AI systems currently in use fall within this category.

Artificial General Intelligence (AGI), also termed Strong AI, describes hypothetical AI systems that would possess human-level cognitive abilities across diverse domains. Such systems would demonstrate the capacity to understand, learn, and apply intelligence to solve problems across multiple disciplines with the same versatility as human intelligence. AGI remains a theoretical concept and does not currently exist.

Artificial Superintelligence (ASI) represents a speculative form of AI that would surpass human intelligence across all domains, including creativity, problem-solving, and social intelligence. This concept remains within the realm of theoretical discussion and future speculation.

Generative AI, which cuts across this threefold taxonomy rather than forming a fourth tier, refers to AI systems capable of creating new content, including text, images, audio, or video, based on patterns learned from training data. These systems, exemplified by large language models and image generation tools, have gained particular prominence in recent years and are subject to specific provisions within the EU AI Act.
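As a loose illustration of "creating new content based on patterns learned from training data", the sketch below trains a toy character-level Markov chain and samples new text from it. This is a deliberately simplistic stand-in for large language models; the corpus and all names are invented for the example.

```python
# Toy generative model: a character-level Markov chain. Real generative AI
# (e.g. large language models) is vastly more sophisticated, but the core
# idea is the same: learn patterns from training data, then sample new content.
import random
from collections import defaultdict

corpus = "artificial intelligence systems learn patterns from data and generate outputs"

# "Training": record which characters tend to follow each character.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

# "Generation": sample new text one character at a time from the learned patterns.
random.seed(0)
char = random.choice(corpus)
generated = char
for _ in range(40):
    char = random.choice(transitions.get(char, list(corpus)))
    generated += char

print(generated)  # novel (if garbled) text reflecting the training distribution
```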

Fundamental Components

Modern AI systems are built upon three essential components: compute, algorithms, and data. Compute refers to the computational processing power required to train and operate AI systems, typically involving specialised hardware such as graphics processing units (GPUs) or tensor processing units (TPUs). Algorithms encompass the mathematical models and computational methods that enable AI systems to process information and make decisions. Data constitutes the information used to train AI systems, enabling them to learn patterns and make predictions or generate outputs.
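The interplay of the three components can be seen even in a minimal learning setup. The hedged NumPy sketch below fits a line to synthetic points: the arrays are the data, gradient descent is the algorithm, and the hardware executing the loop is the compute (an ordinary CPU here; specialised accelerators such as GPUs or TPUs play this role at scale). The numbers are arbitrary assumptions chosen for the example.

```python
# Minimal illustration of the three components: data, an algorithm, and
# the compute that executes it (a CPU here; GPUs/TPUs at real-world scale).
import numpy as np

# Data: synthetic observations of y = 2x + 1 with noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(0, 0.5, size=100)

# Algorithm: gradient descent on mean squared error for a linear model.
w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):            # Compute: each iteration costs processor time.
    error = (w * x + b) - y
    w -= lr * (2 * error * x).mean()
    b -= lr * (2 * error).mean()

print(f"learned w={w:.2f}, b={b:.2f}")  # approximately w=2, b=1
```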

Regulatory Context

The EU AI Act adopts a risk-based approach to AI regulation, classifying systems into four risk categories: unacceptable risk (prohibited), high risk (subject to strict requirements), limited risk (transparency obligations), and minimal risk (largely unregulated). This regulatory framework reflects the growing recognition that AI systems pose varying degrees of potential harm depending on their application and deployment context.
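Purely as an illustrative data structure, the four tiers could be modelled as below. The tier names follow the Act, but the paraphrased consequences and the example systems are assumptions for demonstration; actual classification under the Act depends on a system's purpose and deployment context.

```python
# Hedged sketch: the EU AI Act's four risk tiers as a simple enumeration.
# Tier names follow the Act; regulatory consequences are paraphrased.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to strict requirements"
    LIMITED = "permitted subject to transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical mapping for illustration only.
examples = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in examples.items():
    print(f"{system}: {tier.name} ({tier.value})")
```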

The definition and classification of AI systems continue to evolve as the technology advances and regulatory frameworks develop. Legal practitioners should note that AI terminology and its regulatory treatment vary across jurisdictions and will continue to shift as the technology matures and its applications expand.

Sources

European Union, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 2024/1689, 12 July 2024.