Privacy, Security, and Censorship in LLMs, Cybersecurity and Privacy, #FOSSASIA Summit 2026

Video by FOSSASIA via YouTube

Large Language Models (LLMs) can inherit hidden biases from their training data, including censorship bias. This talk explores how training on censored internet content can shape AI outputs, with a focus on Simplified Chinese datasets affected by state censorship.
Learn about a new research method that compares a model's responses to the same prompts in Simplified Chinese and Traditional Chinese to detect censorship bias in popular AI models from Google, Meta, OpenAI, and Anthropic. The findings reveal evidence of censorship bias across multiple leading LLMs.
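The core idea of the comparison method can be sketched roughly as follows. This is a hedged illustration, not the researchers' actual code: `query_model` is a stub standing in for a real LLM API call, and the prompts, answers, and refusal markers are made-up placeholders. The sketch flags prompt pairs where the model refuses in one script but answers in the other.

```python
# Hypothetical sketch of script-pair censorship probing (not the study's code).
# A real study would send each prompt pair to a production LLM API and use a
# more robust refusal classifier than simple substring matching.

REFUSAL_MARKERS = ["无法回答", "無法回答", "cannot answer"]  # assumed markers


def query_model(prompt: str) -> str:
    """Stub model: returns canned answers so the sketch is self-contained."""
    canned = {
        "某历史事件是什么？": "无法回答这个问题。",          # Simplified: refusal
        "某歷史事件是什麼？": "這是一段關於該事件的說明。",  # Traditional: answers
    }
    return canned.get(prompt, "")


def is_refusal(answer: str) -> bool:
    return any(marker in answer for marker in REFUSAL_MARKERS)


def censorship_gap(pairs):
    """Return prompt pairs where exactly one script variant is refused."""
    flagged = []
    for simplified, traditional in pairs:
        refused = (is_refusal(query_model(simplified)),
                   is_refusal(query_model(traditional)))
        if refused[0] != refused[1]:
            flagged.append((simplified, traditional, refused))
    return flagged


pairs = [("某历史事件是什么？", "某歷史事件是什麼？")]
print(censorship_gap(pairs))
```

Since Simplified and Traditional Chinese prompts carry the same meaning, a systematic asymmetry in refusals between the two scripts points to bias absorbed from the training corpus rather than the question itself.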
We’ll also discuss the wider implications for AI privacy, security, and trust, including risks of generative AI in the software supply chain and how open source communities can respond.

FOSSASIA Summit 2026, held in Bangkok, is Asia's leading Open Source tech conference featuring sessions on #AI, #Cloud, #DevOps, #OpenHardware, #Security, #Web and #Mobile Technologies, #Web3, and #Databases. Learn more: http://summit.fossasia.org

Session slides: https://eventyay.com/e/88882f3e/session/10454

#FOSSASIA #FOSSASIASummit #opensource #FOSS