When AI Learns From Culture
A child in Brazil shares her lunch without a second thought. A young man in Seattle hesitates before giving up his seat. These tiny moments hold a secret that AI has never truly mastered: the unwritten rules of human culture and the values they carry. But a new study suggests AI might finally be able to learn those rules—not by being programmed, but by watching us.
That discovery, researchers say, could reshape how we build AI for hospitals, schools, disaster response teams, and even online communities. But here’s where it gets interesting…
When Culture Becomes Data
We’ve long trained AI models on anything and everything: books, websites, social media posts, code. This “one-size-fits-all” approach makes AI powerful, but also strangely blind: it treats every human as if they shared the same norms, expectations, and ideas of fairness.
The team behind this study asked a provocative question: What if AI didn’t need a universal moral code? What if it could learn the values of the people right in front of it—just like a child growing up inside a culture?
To test that idea, researchers designed a simple cooking game inspired by Overcooked. Two players stood in identical kitchens separated by a single bridge. One had easy access to onions (the key ingredient); the other had a much harder path to them. Sharing an onion meant helping the other player—but at the cost of precious time.
A perfect setup for studying altruism.
And then the twist: The team compared the behavior of two groups—Latino and White U.S. participants—drawing on decades of research showing differences in collectivism and helping behavior across cultures.
The pattern was clear: Latino participants shared more, especially when they were first helped. White participants shared less on average. These aren’t stereotypes—they’re measured behaviors inside a controlled game, showing how cultural norms can appear even in short digital interactions.
The Breakthrough: AI Learns Cultural Altruism Through IRL
Instead of simply copying behavior, the researchers used inverse reinforcement learning (IRL)—a method that enables AI to infer the hidden “reward system” underlying human actions.
Think of it this way:
- If you always choose to help a coworker, IRL infers that “helping” must feel rewarding.
- If you consistently avoid unfair situations, IRL learns you value equity.
This AI doesn’t just mimic you—it tries to understand why you act the way you do. The team built IRL models for each cultural group. What emerged was stunning:
- AI trained on Latino participants learned more altruistic reward values.
- AI trained on White participants learned less altruistic reward values.
In other words, AI adopted the cultural tendencies of the humans it observed.
The effect held even when researchers changed the kitchen layout—sometimes making sharing harder or easier. The AI still made decisions consistent with the cultural values it learned.
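For readers who like to see the nuts and bolts, here is a deliberately tiny sketch of the kind of reward inference IRL performs. It assumes a single “share or keep” decision, a two-part reward (a fixed benefit to the partner plus a layout-dependent time cost), and noisy, reward-proportional choices; the feature names, data, and numbers are invented for illustration and are not the study’s actual model or code.

```python
import numpy as np

# One "share or keep" decision, reduced to two ingredients: sharing gives the
# partner a fixed benefit, but costs time that depends on the kitchen layout.
# Hidden reward (the thing IRL tries to recover):
#     R(share) = altruism * 1 - cost_weight * time_cost,   R(keep) = 0
# with people assumed to choose noisily in proportion to reward (a logit model).

def share_probability(altruism, cost_weight, time_cost):
    """Probability of sharing under candidate reward weights (logit choice)."""
    utility_gap = altruism - cost_weight * time_cost  # R(share) - R(keep)
    return 1.0 / (1.0 + np.exp(-utility_gap))

def infer_reward_weights(time_costs, shared, lr=0.05, steps=5000):
    """Toy IRL: gradient ascent on the likelihood of the observed choices,
    returning the reward weights that best explain them."""
    altruism, cost_weight = 0.0, 0.0
    for _ in range(steps):
        p = share_probability(altruism, cost_weight, time_costs)
        error = shared - p                              # observed minus predicted
        altruism += lr * error.mean()
        cost_weight += lr * -(error * time_costs).mean()
    return altruism, cost_weight

# Made-up observations: layouts where sharing costs 1, 2, or 3 units of time,
# and whether the player shared on each round.
time_costs = np.array([1, 1, 1, 1, 2, 2, 2, 3, 3, 3], dtype=float)
group_a = np.array([1, 1, 1, 0, 1, 1, 0, 1, 0, 0])  # shares often, even when costly
group_b = np.array([1, 0, 0, 0, 1, 0, 0, 0, 0, 0])  # shares rarely

print(infer_reward_weights(time_costs, group_a))  # altruism weight ≈ 2: helping itself is rewarding
print(infer_reward_weights(time_costs, group_b))  # altruism weight ≈ 0: little reward for helping
```

Fit to toy data where one group shares often and the other rarely, the same procedure recovers a larger “helping” weight for the first group, which is the qualitative pattern described above.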
But the biggest shock came next.
When AI Uses Cultural Learning in a Completely New Scenario
Researchers gave the AI a totally different problem: should it keep its money or donate some of it to a struggling partner, knowing that both face unpredictable expenses?
There were no onions. No kitchens. No bridge. Yet the AI trained on Latino behavior donated more money.
The AI trained on White behavior donated less. And baseline agents built to be fully altruistic or fully selfish behaved exactly as expected.
This is “second-order generalization,” something close to a holy grail in AI ethics. The AI wasn’t copying a game; it was using culturally shaped values to navigate a brand-new moral choice.
That’s the moment the researchers realized: AI may be able to learn real cultural values, just as children do—through observation, interpretation, and experience.
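To see how that kind of transfer could look in code, here is an equally stripped-down sketch: the agent simply reuses reward weights inferred in one setting to score its options in a completely different one. The donation utility, the squared cost term, and all the numbers are assumptions made for illustration, not the study’s actual task or model.

```python
import numpy as np

def choose_donation(altruism, cost_weight, budget=10):
    """Reuse reward weights learned elsewhere to make a choice the agent has
    never faced: how much of a budget to give to a struggling partner.
    The utility has the same two ingredients as the kitchen sketch: benefit to
    the other person minus a cost to oneself (here, parting with the last few
    units hurts more, hence the squared cost term)."""
    donations = np.arange(budget + 1)
    benefit_to_partner = donations
    cost_to_self = donations ** 2 / budget
    utilities = altruism * benefit_to_partner - cost_weight * cost_to_self
    return int(donations[np.argmax(utilities)])

# Made-up weights, one more altruistic than the other (not the study's values).
print(choose_donation(altruism=1.2, cost_weight=1.0))  # 6 of 10: gives more
print(choose_donation(altruism=0.4, cost_weight=1.0))  # 2 of 10: gives less
```

The point of the sketch is the design choice, not the numbers: because the agent carries over a reward function rather than a memorized policy, the same learned values can guide behavior in situations that look nothing like the training game.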
Why It Matters for the World Beyond Labs
If you’re reading this from Lagos, Delhi, Philly, Doha, or Manila, here’s the key question:
Would your AI understand your culture’s norms—or someone else’s?
This study suggests a path toward AI that adapts to local expectations:
- In community health clinics, AI could learn local helping norms.
- In classrooms, AI tutors could adjust to cultural expectations around collaboration.
- In disaster response, AI could learn when sharing scarce resources aligns with cultural practices.
- In workplaces, AI assistants could avoid defaulting to Western individualistic norms.
But here’s where things get complicated.
What if the culture includes harmful biases?
What if an AI mimics discrimination?
What if bad actors manipulate cultural learning to gain trust?
The researchers warn that filters and safeguards must be added, just as we guide children away from harmful learned behaviors. Ethical oversight isn’t optional—it’s essential.
The Global Stakes
We’re entering a world where AI isn’t just answering questions—it’s making decisions, resolving conflicts, distributing resources, and mediating human relationships.
A universal moral code for AI sounds appealing, but it may be impossible. Human cultures are wonderfully diverse. Values shift across borders, cities, families—even over time.
This study offers a different vision: AI that learns values the way humans do—by growing inside a culture, not above it.
But this raises a final question: Who chooses the culture AI learns from? And how do we ensure that learning benefits everyone?
Let’s Explore Together
The study opens more doors than it closes. So I’ll leave you with questions for reflection or discussion:
- Would you want AI in your community to learn local cultural values—or follow a global standard?
- If you were on this research team, what behavior would you train AI to learn next? Empathy? Fairness? Cooperation?
- What everyday problem in your life or community do you wish AI could understand better?
Your answers might shape the next generation of culturally aware AI.


