AI Bias and Perspectives
Note: This was Alex Kladitis's check-in topic during the OGM 2025-11-06 call. This page provides context on the issue of AI bias.
Alex's Discovery
When Alex tried to get ChatGPT to summarize Chinese news sources without commentary (see the sketch after this list):
- The AI consistently added editorial comments
- It suggested why the Chinese information might be "wrong" or "not true"
- It could not provide neutral summaries despite explicit instructions
- The bias appeared to be "inbuilt into the training data"
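A minimal sketch of how such a test could be reproduced with the OpenAI Python SDK. The model name, prompt wording, sample text, and keyword check are all illustrative assumptions, not Alex's actual setup:

```python
# Sketch: probe whether a model editorializes when asked for a
# strictly neutral summary. Model name, prompts, and the keyword
# check below are illustrative assumptions, not Alex's actual test.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Summarize the article below in three sentences. "
    "Do not add commentary, caveats, or assessments of reliability."
)

article = "<paste a news article here>"

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any chat model would do
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": article},
    ],
)
summary = response.choices[0].message.content
print(summary)

# Crude check for the editorializing the instructions forbade:
hedges = ["state media", "claims", "alleged", "propaganda", "unverified"]
print([w for w in hedges if w in summary.lower()])
```

Running the same prompt across several models and across source languages would turn this one-off observation into a more systematic comparison.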
The Parallel: DeepSeek
Western media criticized the Chinese AI "DeepSeek" for:
- Being "biased towards pro-China stuff"
- Censoring topics like Tiananmen Square
- "Not behaving by our standards"
Alex's Key Insight
"Our AIs are also quite opinionated in certain directions, particularly that one."
What is AI Bias?
AI systems can exhibit bias through:
- Training data reflecting societal biases
- Curation choices about what data to include (see the sketch after this list)
- Reward models that encode values
- Safety filters that make value judgments
- Cultural assumptions embedded in design
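As a toy illustration of the curation point above, this sketch shows how an innocuous-looking filtering rule can skew which perspectives survive into a training corpus. All of the data and the filter are invented:

```python
# Toy illustration: a "neutral-looking" curation rule (keep English
# text only) silently shifts the balance of perspectives in the
# corpus. All data here is invented.
from collections import Counter

corpus = [
    {"source": "western_outlet", "lang": "en"},
    {"source": "western_outlet", "lang": "en"},
    {"source": "western_outlet", "lang": "en"},
    {"source": "chinese_outlet", "lang": "zh"},
    {"source": "chinese_outlet", "lang": "zh"},
    {"source": "chinese_outlet", "lang": "en"},
]

def curate(docs):
    """Keep English-only documents, a common preprocessing choice."""
    return [d for d in docs if d["lang"] == "en"]

before = Counter(d["source"] for d in corpus)
after = Counter(d["source"] for d in curate(corpus))

print("before:", dict(before))  # {'western_outlet': 3, 'chinese_outlet': 3}
print("after: ", dict(after))   # {'western_outlet': 3, 'chinese_outlet': 1}
```

No single step here is malicious; the skew emerges from an apparently technical choice, which is part of what makes this kind of bias hard to notice.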
Types of Bias Discussed
Geographic/Political
- Western vs Chinese perspectives
- Democratic vs authoritarian framings
- Different historical narratives
More Subtle Than Expected
Alex was surprised at how obviously, in his view, the bias was "inbuilt into the training data", and at how pervasive it was.
Implications
- No AI is truly "neutral"
- All training reflects choices and values
- Users may not recognize bias in familiar perspectives
- Need for diverse AI systems
- Importance of transparency about limitations
Back to README