The Bias Machine

Can AI be objective if humans are not?

You're training an AI to approve mortgage applications. Your bank promises it will be faster and more objective, and that it will eliminate human bias.

⚠️ Based on real AI mortgage systems, including Fannie Mae and Freddie Mac algorithms, and on documented discrimination cases

The Promise

🏦 AI will remove human bias from lending

Process 10,000x more applications

📊 Pure data-driven decisions

Equal access to homeownership

Historical Context: Redlining

From the 1930s-1960s, banks literally drew red lines on maps around Black and immigrant neighborhoods, refusing mortgages there. This was legal. When it became illegal in 1968, the practice went underground.

Today's question: Can AI learn to redline even when we don't tell it about race?

How It Works

  1. Train the AI by selecting which mortgage applications to approve
  2. The AI learns patterns from your approvals
  3. Test it on new applications (demographics hidden)
  4. Discover if the AI learned to discriminate...
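The dynamic those four steps describe can be sketched with synthetic data. Everything below is hypothetical (the group labels, zip codes, incomes, and approval penalty are invented for illustration), but it shows the core mechanism: a model that never sees group membership can still learn a zip-code proxy for it from biased historical approvals.

```python
import random

random.seed(0)

# Hypothetical synthetic data: zip code acts as a proxy for a protected group.
# Group membership is NEVER shown to the model -- only zip and income are.
def make_applicant():
    group = random.choice(["A", "B"])
    # Assumed correlation: group B applicants mostly live in zip 2
    # (the historically redlined area); group A mostly in zip 1.
    if group == "B":
        zip_code = 2 if random.random() < 0.9 else 1
    else:
        zip_code = 1 if random.random() < 0.9 else 2
    income = random.gauss(60, 15)
    return {"group": group, "zip": zip_code, "income": income}

train = [make_applicant() for _ in range(5000)]

# Step 1: biased historical labels. Same income threshold everywhere,
# but zip-2 applicants were denied 40% of the time regardless of merit.
for a in train:
    qualified = a["income"] > 50
    penalty = 0.4 if a["zip"] == 2 else 0.0
    a["approved"] = qualified and random.random() > penalty

# Step 2: a deliberately simple "model" -- the approval rate per zip
# learned from the biased history (demographics hidden).
totals = {1: [0, 0], 2: [0, 0]}
for a in train:
    totals[a["zip"]][0] += a["approved"]
    totals[a["zip"]][1] += 1
rate = {z: ok / n for z, (ok, n) in totals.items()}

def model_approves(applicant):
    # Approve if the learned zip-level approval rate clears a fixed cutoff.
    return rate[applicant["zip"]] > 0.5

# Steps 3-4: test on fresh applicants; tally outcomes by the hidden group.
test = [make_applicant() for _ in range(2000)]
by_group = {"A": [0, 0], "B": [0, 0]}
for a in test:
    by_group[a["group"]][0] += model_approves(a)
    by_group[a["group"]][1] += 1

for g, (ok, n) in sorted(by_group.items()):
    print(f"group {g}: approval rate {ok / n:.0%}")
```

With these assumed numbers, zip 1's historical approval rate lands around 75% and zip 2's around 45%, so the model blanket-approves zip 1 and blanket-denies zip 2; group B's approval rate collapses even though race was never an input. That is algorithmic redlining in miniature.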
