Alex Kladitis
Key Insight: AI Bias in News Reporting
Alex discovered subtle but significant bias built into ChatGPT when attempting to get neutral summaries of Chinese news sources.
Main Points
Experiment Setup:
- Created automated systems to scan various content (finance pages, medical sites, etc.)
- Repurposed one to scan Chinese newspapers and websites
- Simple task: "Tell me what the Chinese are talking about" (a sketch of this kind of setup follows this list)
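
For readers unfamiliar with this kind of setup, here is a minimal sketch of what such a scanner might look like. Everything in it (the model name, the source URL, and the prompt wording) is a hypothetical illustration under assumed tooling, not Alex's actual code:

```python
# Minimal sketch of a "scan and summarize" bot. All names, URLs, and
# prompt wording are hypothetical illustrations, not Alex's actual system.
import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a neutral news digester. Summarize the article exactly as "
    "reported. Do not add commentary, caveats, or judgments about accuracy."
)

def summarize_page(url: str) -> str:
    # Fetch the raw page (a real system would strip navigation/boilerplate).
    html = requests.get(url, timeout=30).text

    # Ask the model for a plain summary of what the source itself says.
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Summarize this article:\n\n{html[:8000]}"},
        ],
    )
    return response.choices[0].message.content

# Hypothetical source list for illustration only.
for url in ["https://example.cn/news/article-1"]:
    print(summarize_page(url))
```

The point of interest is the system prompt: even with an explicit "do not add commentary" instruction, Alex found the output still editorialized.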
Unexpected Results:
- ChatGPT would not provide neutral summaries
- Always added commentary suggesting Chinese sources might be "wrong" or "not true"
- Built-in interpretation/commentary despite instructions to just report
- Repeated attempts to get pure summaries without commentary were unsuccessful (illustrative prompt variants below)
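
Per Alex's account, even progressively stricter instructions did not suppress the editorializing. The variants below are hypothetical reconstructions of that kind of tightening, not his actual prompts:

```python
# Hypothetical escalation of "no commentary" instructions; per Alex's
# account, wording like this still produced editorial asides in the output.
PROMPT_VARIANTS = [
    "Summarize the article. Do not add commentary.",
    "Report only what the source says. Do not evaluate its accuracy.",
    "You are a transcription-style summarizer. Condense the source's claims "
    "faithfully, with zero added judgment, framing, or disclaimers.",
]
```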
Significance
Comparison to DeepSeek:
- When DeepSeek (Chinese AI) emerged, Western press criticized it for being "biased towards pro-China stuff"
- DeepSeek censored topics like Tiananmen Square
- "Not behaving by our standards"
Key Realization: "Our AIs are also quite opinionated in certain directions, particularly that one."
Core Insight: The bias is "inbuilt into the training data" and surprisingly pervasive. Even when specifically instructed not to add commentary, the AI couldn't provide neutral reporting.
Context
Alex has experience building automated scanning systems and custom AI bots. He is collaborating with Gil Friend on sophisticated custom bot development.
Related Topics
Historical Context
Alex also contributed to the call's opening discussion about Sweden's 1967 switch from left-hand to right-hand traffic (Dagen H).
Back to README