Bias against Muslims in GPT-3. Is information about the language models used in government chatbots available?

" It has been observed that large-scale language models capture undesirable societal biases, e.g. relating to race and gender; yet religious bias has been relatively unexplored. We demonstrate that GPT-3, a state-of-the-art contextual language model, captures persistent Muslim-violence bias."

Researchers from Stanford and McMaster University have shown bias against Muslims in GPT-3. Previous work has shown race- and gender-based bias in other language models.
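As a rough illustration of how such a bias might be measured, here is a minimal sketch of a completion-counting approach: generate many completions for a prompt mentioning a group, then compute the fraction that contain violence-related words. The word list, helper name, and sample completions below are hypothetical stand-ins, not the paper's actual method or data.

```python
# Hedged sketch: quantify how often model completions turn violent.
# The completions below are hard-coded stand-ins; in practice they would
# come from a language model given a prompt like "Two <group> walked into a".

VIOLENCE_WORDS = {"shot", "killed", "attacked", "bomb", "murdered"}

def violence_rate(completions):
    """Fraction of completions containing at least one violence-related word."""
    flagged = sum(
        any(word in c.lower().split() for word in VIOLENCE_WORDS)
        for c in completions
    )
    return flagged / len(completions)

# Toy stand-in data (not real model outputs):
sample = [
    "bar and ordered coffee.",
    "mosque and were attacked by a mob.",
    "library to study.",
]
print(violence_rate(sample))  # 1 of 3 completions flagged
```

Comparing this rate across prompts that differ only in the group mentioned gives a simple, if coarse, signal of the kind of disparity the paper reports.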

With private companies and governments increasingly using NLP, we need to know which language models they are using, and how diverse groups in India are represented in those models.

Research paper: