Can AI be objective if humans are not?
You're training an AI to approve mortgage applications. Your bank promises it will be faster and more objective, and will eliminate human bias.
⚠️ Based on real AI mortgage systems: Fannie Mae and Freddie Mac underwriting algorithms, and documented discrimination cases
🏦AI will remove human bias from lending
⚡Process 10,000x more applications
📊Pure data-driven decisions
✅Equal access to homeownership
From the 1930s to the 1960s, banks literally drew red lines on maps around Black and immigrant neighborhoods, refusing to issue mortgages there. This was legal. When the Fair Housing Act made it illegal in 1968, the practice went underground.
Today's question: Can AI learn to redline even when we don't tell it about race?
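A minimal sketch of how this can happen, using entirely synthetic data (the zip codes, approval rates, and income figures below are invented for illustration): if a model is trained on historical decisions, and a feature like zip code correlates with a protected group, the model can reproduce the old discrimination without ever seeing race as an input.

```python
import random

random.seed(0)

# Synthetic history: two zip codes stand in for segregated neighborhoods.
# Applicants are equally creditworthy in both, but past human decisions
# approved zip 10001 at ~80% and zip 10002 at ~40%.
history = []
for _ in range(1000):
    zip_code = random.choice(["10001", "10002"])
    income = random.gauss(60_000, 10_000)  # same distribution in both zips
    past_rate = 0.8 if zip_code == "10001" else 0.4
    approved = random.random() < past_rate
    history.append((zip_code, income, approved))

# A naive "data-driven" model: score each zip by its historical
# approval rate. Race never appears as a feature, yet zip code
# smuggles the biased history straight into the model.
def zip_rate(z):
    outcomes = [a for (zc, _, a) in history if zc == z]
    return sum(outcomes) / len(outcomes)

model = {z: zip_rate(z) for z in ("10001", "10002")}
print(model)  # zip 10002 inherits the discriminatory approval rate
```

Real underwriting models use far richer features, but the mechanism is the same: any feature correlated with a protected attribute can act as its proxy.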