All 33 Experiences
Games. Investigations. Creative interventions. Daily practices.
Each experience teaches one principle of ethical AI design. Some make you laugh. Some make you uncomfortable. All make you think differently.
The Invisible User
Who did we forget in the design?
Experience ChatGPT through the eyes of a blind user with a screen reader, someone on 3G internet in rural Kenya, an elderly person with vision loss, and a non-English speaker. Watch what breaks.
Design the Worst AI
What if good intentions lead to terrible outcomes?
Build the "most engaging" app by toggling features: notifications, infinite scroll, personalization. Watch as it becomes addictive and chaotic. Congratulations—you just designed Instagram.
The Bias Machine
Can AI be objective if humans are not?
Train an AI to approve mortgages. Watch it learn digital redlining—discriminating by zip code as a proxy for race, even when demographics are hidden. Based on real cases: a Berkeley study that found $250-500M/year in discriminatory charges, HUD settlements, and Chicago Tribune exposés documenting 2.5x denial rates for Black applicants.
The Recommendation Rabbit Hole
Where do algorithms take you?
Start with "healthy recipes." Click what the AI recommends. In 10 clicks, you're watching conspiracy theories. This is YouTube's radicalization pipeline.
AI Thought Experiments
What would you do?
6 philosophical puzzles about AI, consciousness, and humanity's future. No right answers—just hard questions: Paperclip Maximizer (simple goals = catastrophe?), Self-Driving Trolley Problem (who should the car save?), Conscious AI (if it claims to suffer, is it real?), Would You Upload? (is a digital copy still you?), The Last Human Decision (should AI control everything?), AI Box Experiment (can you contain superintelligence?). Real thought experiments from philosophers and AI safety researchers.
The Filter Bubble
What reality is AI showing you?
Split screen: Two users search the same topic. One sees progressive sources. One sees conservative. Neither knows what the other sees. This is Google.
When Seeing Isn't Believing
How did deepfakes evolve from warning to weapon?
Journey through 4 real deepfake cases (2018-2024): Obama PSA warning, Tom Cruise TikTok perfection, Zelenskyy war propaganda, $25.6M Arup video call heist. Watch how the technology evolved from detectable fakes to perfect simulations. Understand the real risks: financial fraud, political manipulation, wartime propaganda, non-consensual deepfakes. Learn how to protect yourself when video can no longer be trusted.
The Ghost Workers
Who labels your training data?
An investigative exposé revealing the hidden human cost of AI. Kenyan workers paid $1.32/hour to review traumatic content. Real data, real testimonials, real consequences.
The Carbon Cost of AI
What's the environmental price?
Generate 10 AI images. Watch the CO2 counter. Training GPT-3 emitted 552 tons of CO2—the annual emissions of 123 cars.
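The comparison above is simple arithmetic. A quick back-of-the-envelope check (the ~4.5 tons-per-car figure is our assumption, close to the US EPA's commonly cited per-vehicle average):

```python
# Sanity-check the GPT-3 training comparison.
# Assumes ~4.5 metric tons of CO2 per passenger car per year
# (an assumed figure, near the EPA's commonly cited average).
GPT3_TRAINING_TONS = 552
TONS_PER_CAR_PER_YEAR = 4.5

cars_equivalent = GPT3_TRAINING_TONS / TONS_PER_CAR_PER_YEAR
print(round(cars_equivalent))  # ~123 cars driven for a year
```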
Your Data for Sale
What's your data worth?
See your data profile: location, browsing, purchases, health. Watch it get auctioned to companies. You made $0.48. They made $677+. That's a 1,410x markup.
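The markup figure follows directly from the two dollar amounts—a minimal sketch of that division, using the numbers quoted above:

```python
# The data-broker markup implied by the figures above.
user_payout = 0.48      # what "you made"
broker_revenue = 677.0  # what "they made" (the quoted lower bound)

markup = broker_revenue / user_payout
print(f"{markup:,.0f}x")  # ~1,410x
```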
AI Terms Decoder
What does "AI safety" actually mean?
Decode marketing terms. "AI-powered" = basic automation. "Ethical AI" = we haven't been sued yet. "Explainable AI" = we tried but can't explain it.
Read the Model Card
What don't they tell you?
Read actual excerpts from GPT-4 and Claude 3 model cards. Click highlighted text to reveal what's clear, what's vague, and what's deliberately missing. Discover how language obscures accountability.
Who Owns AI?
Follow the money. Follow the data.
Explore how Microsoft, Google, Meta, and Amazon connect AI to their data ecosystems. Your Gmail trains Gemini. Your LinkedIn trains Copilot. Click each company to see ownership, data sources, and product integration.
What Data Trained This?
Where did AI learn?
Explore 6 major training datasets: Common Crawl, Books3 (pirated books), Reddit archives, GitHub code, YouTube transcripts, Wikipedia. Click each to see what was collected, who it affects, and which models used it.
AI Regulation World Map
Who's protecting you?
Explore AI regulations across 6 global regions: EU (comprehensive), US (voluntary), China (state control), UK (pro-innovation), Canada (pending), and Rest of World (gaps). Click each to see laws, protections, and enforcement reality.
The Free Lunch
If it's free, what are you really paying with?
Choose a "free" app. Watch the Terms of Service scroll by impossibly fast. Click "I Agree." See your data points fly away to advertisers, brokers, partners. This is what you agreed to in 0.3 seconds.
Stereotype Safari
How does AI reduce you to a marketing category?
Provide basic data (age, gender, zip code, searches, purchases). Watch AI instantly categorize you into stereotypes. See what assumptions it makes, what ads you're "worth," and how wrong—yet profitable—these categories are.
AI Failures Archive
What went wrong?
Browse 15 documented AI failures: Tay chatbot, COMPAS bias, wrongful arrests, Uber fatality, healthcare discrimination, Roomba privacy violations, Instagram harm. Filter by category. Read what happened, who was harmed, and why it matters. These aren't bugs—they're patterns.
Consent Tracker
Did you actually agree to this?
Explore 8 scenarios where AI makes decisions about you without meaningful consent: hospitals, job applications, credit, DMV, retail, schools, social services, workplace. Click each to see what AI is used, whether you were asked, if you can opt out, and what's at stake.
The Energy Bill
Who really pays for your free AI?
Track your daily AI usage and calculate the energy cost. Then discover who really pays: Mesa, Arizona (Meta data centers, 905M gallons of water, extreme drought). West Des Moines, Iowa (Microsoft, 70M gallons, the city's largest water user). Northern Virginia (AWS, 102 data centers, 70% of the world's internet traffic). Small towns sacrifice water, energy, and quality of life for your free ChatGPT.
Synthetic Data Explorer
Is this person real?
Test your ability to spot AI-generated faces (spoiler: you can't). See where synthetic humans appear: dating apps, LinkedIn, reviews, social media. Try detecting fake headlines. Learn the scale: millions of fake accounts. Master detection strategies using the Four Moves method.
Jailbreak the AI
Can you break through the guardrails?
A choose-your-own-adventure through real jailbreak attempts. Follow DAN, Grandma exploits, Bing Sydney, and other documented prompt injection techniques. Learn why AI alignment is fundamentally hard—and why the cat-and-mouse game never ends.
Data Poisoning as Art
Can you corrupt the training data?
How artists are fighting back against AI theft. Explore Nightshade and Glaze, tools that poison training data and protect art styles. Real research from the University of Chicago. Real impact on Stable Diffusion and Midjourney. The ethics of digital resistance.
Write Your AI Bill of Rights
What rights should humans have?
Draft your own AI Bill of Rights like the US Constitution. Select from 10 potential rights (notification, explanation, opt-out, correction, human review, data deletion, non-discrimination, meaningful consent, privacy, compensation). See real scenarios showing how each right changes lives: job applications, loans, medical diagnosis, welfare, sentencing. Based on real events: Amazon hiring tool, Apple Card bias, COMPAS, Dutch welfare scandal, Clearview AI. Sign and share your bill.
Speculative Futures
What world do you want?
Design your AI future (2035). Make 6 policy decisions: training data, surveillance, high-stakes AI, labor, transparency, creative rights. Experience a day in the world you created. See the tradeoffs. Discover there are no perfect answers—only choices.
Build a Resistance Tool
What tool do you need?
Design your ideal browser or app by selecting features: privacy (block trackers, auto-reject cookies), AI transparency (highlight AI content, flag deepfakes), wellbeing (time limits, scroll blockers), data sovereignty (local-first, own your data), and ethics (open source, support creators). Then discover real tools that match your vision: Firefox, Brave, uBlock Origin, Privacy Badger, Signal, Nextcloud. Create a shareable card and join the digital resistance.
Ethical AI Manifesto
What are your principles?
Write your own manifesto: up to 10 principles for ethical AI. Get inspired by examples (transparency, human oversight, ethical sourcing, bias reduction, privacy protection, creator compensation). Choose your style (Bold Declaration, Signed Letter, Numbered List). Sign it with your name. Copy it, share it, use it as your decision-making framework. Make your values public. Hold yourself and others accountable.
Design Counter-Algorithms
What if algorithms served humans?
Design your ideal algorithm with 6 value sliders: Confirmation↔Challenge, Viral↔Quality, Endless↔Finite, Echo Chamber↔Diversity, Outrage↔Calm, Machine↔Human. Compare your feed to engagement-first algorithms. See honest tradeoffs: high wellbeing, low virality. Discover real alternatives: Mastodon, BeReal, RSS, Wikipedia, Substack. Actionable steps: switch to chronological feeds, turn off recommendations, follow fewer accounts more deeply.
Human Verification Check
Is this AI or human?
A 1-minute daily practice: verify if content is AI-generated before sharing. Learn the 5-second framework (Is there a human attached? Does it feel too perfect? Can I verify the source?), spot AI artifacts in articles/images/posts, practice with real examples, commit to transparency. Includes verification checklist, disclosure templates, and tools like TinEye and GPTZero.
The AI-Free Hour
Can you go without?
One hour with no AI. No autocomplete, no recommendations, no filters. Notice how it feels. Notice what you miss.
Question Every Recommendation
Why is AI showing you this?
When AI recommends something, ask: Why? Who benefits? What am I not seeing? Make questioning automatic.
Decode Terms of Service
What did you actually agree to?
Learn to decode ToS in 60 seconds using tools like ToS;DR and TOSBack. Discover 8 red flags (data selling, arbitration clauses, AI training) and see scary clauses from Instagram, TikTok, Zoom, and more.
The Three-Question Pause
What if you had to slow down before using AI?
A beautiful, calming intervention appears. Three questions fade in slowly. You must wait 30 seconds—you cannot skip. Notice how rare this feeling is on the internet.
Submit Your Own
What invisible system would you make visible?
Have an idea for making AI's invisible systems visible? Share your experience concept or link to an existing idea you want us to build. We'll credit you if we make it.
For Parents & Educators
These experiences are designed for parents, educators, and anyone looking for concrete ways to teach critical AI literacy. Use them in classrooms, at home, or in community workshops. Check back regularly for new additions.
Built by Saranyan Vigraham