Over 2,000 documented incidents. These aren't bugs—they're features.
⚠️ Content Warning
This archive documents real harms: wrongful arrests, deaths, discrimination, privacy violations, and manipulation. All incidents are verified with sources. This is not hypothetical—these failures happened to real people.
The AI Incident Database contains over 2,000 documented failures where AI caused real harm. This archive highlights 15 critical incidents across six categories: discriminatory bias, safety failures, privacy violations, wrongful arrests, medical harm, and misinformation.
These aren't edge cases. They're patterns. And they're getting worse as AI scales.
Bias is systematic, not accidental. Nearly every AI system trained on historical data reproduces historical discrimination. Hiring algorithms penalize women. Facial recognition fails on Black faces. Medical algorithms deny care to minorities. These aren't bugs—they're features of training on a biased world.
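To make that mechanism concrete, here is a minimal, self-contained sketch in Python. The data is entirely synthetic; the groups, the proxy feature, and the historical hire rates are illustrative assumptions, not drawn from any incident above. It shows how a model fit to biased historical decisions reproduces the bias through a correlated proxy feature, even when the protected attribute itself is never given to the model.

```python
import random
from collections import Counter

random.seed(0)

def make_historical_record():
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5                   # both groups equally qualified on average
    proxy = (group == "B") ^ (random.random() < 0.1)    # proxy feature tracks group ~90% of the time
    # Biased historical decision: qualified candidates from group B
    # were hired far less often than qualified candidates from group A.
    hired = qualified and random.random() < (0.9 if group == "A" else 0.3)
    return {"group": group, "qualified": qualified, "proxy": proxy, "hired": hired}

history = [make_historical_record() for _ in range(20000)]

# "Training": estimate P(hired | qualified, proxy) from the historical labels.
# The group column is never shown to the model.
counts, hires = Counter(), Counter()
for r in history:
    key = (r["qualified"], r["proxy"])
    counts[key] += 1
    hires[key] += r["hired"]

model = {key: hires[key] / counts[key] for key in counts}

# The learned rule penalizes the proxy, and therefore group B,
# even though "group" was never a feature.
for (qualified, proxy), rate in sorted(model.items()):
    print(f"qualified={qualified!s:5} proxy={proxy!s:5} -> learned hire rate {rate:.2f}")
```

Running it shows the learned rule assigning qualified candidates who carry the proxy feature a much lower predicted hire rate, close to what the biased historical process gave group B. Nothing in the pipeline corrects the pattern; it only compresses it.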
Deployment outpaces safety. Companies release AI systems into high-stakes domains—criminal justice, healthcare, autonomous vehicles—before they're ready. The incentive is speed to market, not safety. Real people become test subjects without consent.
Accountability is avoided. When AI fails, companies blame "the algorithm" as if it's separate from their decisions. Air Canada claimed its chatbot was "a separate legal entity." Tesla calls crashes "driver error" even with Autopilot engaged. Responsibility is systematically obscured.
Harms compound for vulnerable groups. Six wrongful arrests from facial recognition—all Black individuals. Healthcare algorithms disadvantage Black patients. UK exam algorithm penalized working-class students. AI failures concentrate harm on those already marginalized.
Scale multiplies harm. Optum's biased healthcare algorithm affected 200 million people. COMPAS influenced thousands of sentencing decisions. Instagram's algorithm harmed millions of teenagers. When AI fails at scale, the damage is catastrophic and nearly impossible to remedy.
Require companies to disclose when AI is used in high-stakes decisions (hiring, criminal justice, healthcare, credit). Public audits should be standard, not optional.
Companies must be liable for AI failures. No "the algorithm did it" defenses. If your AI causes harm, you are responsible—legally and financially.
Ban or heavily regulate AI in domains where errors cause irreversible harm: criminal justice, child welfare, asylum decisions. Some decisions require human judgment.
Before deployment, AI systems should be tested by independent auditors (not the companies building them) for bias, safety, and accuracy across demographic groups.
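As one concrete illustration of what such an audit can involve, here is a minimal sketch in Python. The record schema, group labels, and test data are hypothetical; a real audit would use the deployed model's actual decisions and independently collected outcomes, and would examine far more than the two metrics shown.

```python
from collections import defaultdict

def audit_by_group(records):
    """Per-group accuracy and false positive rate for a set of model decisions.

    Each record is a dict (illustrative schema) with:
      'group'     -- demographic group label
      'actual'    -- true outcome (0 or 1)
      'predicted' -- the model's decision (0 or 1)
    """
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "fp": 0, "negatives": 0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        s["correct"] += int(r["predicted"] == r["actual"])
        if r["actual"] == 0:
            s["negatives"] += 1
            s["fp"] += int(r["predicted"] == 1)

    return {
        group: {
            "accuracy": s["correct"] / s["n"],
            "false_positive_rate": (s["fp"] / s["negatives"]) if s["negatives"] else None,
        }
        for group, s in stats.items()
    }

# Hypothetical held-out records; an independent auditor would supply these,
# not the company whose system is being audited.
test_records = [
    {"group": "A", "actual": 0, "predicted": 0},
    {"group": "A", "actual": 0, "predicted": 1},
    {"group": "A", "actual": 1, "predicted": 1},
    {"group": "B", "actual": 0, "predicted": 1},
    {"group": "B", "actual": 1, "predicted": 0},
    {"group": "B", "actual": 1, "predicted": 1},
]

for group, metrics in sorted(audit_by_group(test_records).items()):
    print(group, metrics)
```

Large gaps between groups on metrics like these are exactly the kind of finding that should block or condition deployment, which is why the measurement has to be done by someone with no stake in the answer.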
Resources
• AI Incident Database: comprehensive database of 2,000+ documented AI failures
• Investigative journalism on algorithmic discrimination
• Stories of algorithmic radicalization and harm
Questions to Ask
• Is AI being used to make decisions about me? In what contexts?
• Who is liable when AI causes harm—the developer, the deployer, or no one?
• Are there independent audits, or only company claims of safety?
• What recourse do I have if AI harms me?
• Should some decisions never be delegated to algorithms?