Marketing Language vs. Reality
The AI industry uses specific terminology to describe its products and practices. This language often obscures real limitations, harms, and business realities. Each term discussed below is backed by investigative reporting, academic research, or documented industry practices.
Marketing language shapes public perception. Research shows that terms like "AI-powered" increase perceived product value by 40%, even when the underlying technology is conventional. This creates information asymmetry between companies and users.
Euphemisms obscure labor conditions. "Human-in-the-loop" became an industry-standard term while masking documented exploitation. TIME's investigation of OpenAI's Kenyan data contractors revealed workers earning around $1.50/hour to review traumatic content, a reality absent from most corporate communications.
Self-regulatory terminology avoids accountability. When companies promise "ethical AI" or "responsible development" without independent auditing or enforcement mechanisms, it functions as what researchers call "ethics washing"—signaling concern without substantive change.
Technical terms create barriers to oversight. Complex terminology makes it difficult for regulators, journalists, and the public to assess claims. This has led to legislation that regulates based on marketing promises rather than documented practices.
Precedent from other industries. Similar patterns occurred in the energy ("clean coal"), financial services ("subprime mortgages"), and tobacco industries, where euphemistic framing delayed regulation and public understanding of harms.
Analysis of Fortune 500 company websites found that 89% use "ethical AI" or "responsible AI" terminology without specific commitments or enforcement mechanisms. This creates what researchers call "ethics theater": visible ethics signaling detached from actual practice.
A 2021 study of machine learning papers found that framing choices (e.g., "bias mitigation" vs. "discrimination reduction") correlate with whether a paper addresses root causes or settles for surface-level technical fixes.
When policymakers adopt industry terminology like "AI safety" without clear definitions, resulting regulations often focus on theoretical future risks rather than documented present harms to workers, marginalized communities, and the environment.
Research by MMC Ventures found that startups using "AI" terminology raised 15-50% more funding than equivalent companies using traditional tech terms, even when the underlying technology was identical.
Further reading:
• Analysis of ethics washing practices and how to identify them
• Academic study of how terminology shapes research priorities and accountability
• Landmark paper on risks hidden by technical terminology
• Independent research on AI accountability and social implications
Critical Questions to Ask
• What evidence supports this claim about AI capabilities or ethics?
• Who conducted the evaluation, and were they independent?
• What information is omitted from model cards or transparency reports?
• Do ethics teams have authority to block harmful deployments?
• What are the working conditions of people "in the loop"?