Hallucinations

AI generating false but plausible-sounding content

Data · Models · Risk
Updated 2 May 2025 · Reviewed
Key Takeaway

An AI system generates plausible-sounding but incorrect or fabricated content that is not grounded in the input data.

Definition

The generation of false or misleading content by an AI system, typically delivered with high confidence, despite lacking grounding in the underlying data or facts. For example, a chatbot citing a fictional court case as legal precedent.
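One common mitigation is a grounding check: compare each sentence of a model's output against the source context it was supposed to draw from, and flag sentences with little overlap. The sketch below uses simple lexical word overlap as a crude proxy for groundedness; the function names, the threshold, and the overlap heuristic are all illustrative assumptions, not a standard algorithm.

```python
# Minimal sketch of a lexical grounding check: flag output sentences
# whose word overlap with the source context falls below a threshold.
# A real system would use entailment models or retrieval-based checks;
# plain word overlap is only a rough, easily fooled proxy.

def overlap_score(sentence: str, context: str) -> float:
    """Fraction of the sentence's words that also appear in the context."""
    sent_words = {w.lower().strip(".,") for w in sentence.split()}
    ctx_words = {w.lower().strip(".,") for w in context.split()}
    if not sent_words:
        return 0.0
    return len(sent_words & ctx_words) / len(sent_words)

def flag_ungrounded(output: str, context: str, threshold: float = 0.5) -> list[str]:
    """Return output sentences whose overlap with the context is below threshold."""
    sentences = [s.strip() for s in output.split(".") if s.strip()]
    return [s for s in sentences if overlap_score(s, context) < threshold]

context = "The report states revenue grew 12 percent in 2024 driven by cloud sales."
output = "Revenue grew 12 percent in 2024. The CEO resigned after a board vote."
print(flag_ungrounded(output, context))  # flags the second, ungrounded sentence
```

The first output sentence is fully supported by the context and passes; the second introduces claims absent from the source and is flagged, mirroring how a hallucinated court case would be caught against a legal corpus.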

Applications & Use Cases

Legal document generation

Client-facing chatbots

Policy drafting tools

Risks & Considerations

Misinformation in high-stakes domains

Loss of trust in AI outputs

Liability for generated content
