No menu items!

A.I. unmasked: the hidden political biases in leading language models like ChatGPT

Recent research has thrown light on the political biases inherent in AI language models.

Multiple academic studies suggest that, based on the data and algorithms used, these models may not be as neutral as intended.

According to MIT Technology Review, a comprehensive study led by researchers from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University analyzed 14 AI models, including popular ones like ChatGPT by OpenAI.

When posed with politically charged statements, these models exhibited a range of biases. Notably, ChatGPT leaned towards left-wing libertarian views.

Another study, conducted by the University of East Anglia, specifically observed ChatGPT and found “significant and systemic” left-wing bias.

Photo Internet reproduction.
Photo Internet reproduction.

This study further supported its findings by analyzing ChatGPT’s responses to a series of ideological questions, confirming the chatbot’s favoritism towards the Labour Party and US Democrats.

The cause of these biases can be attributed to the extensive text data from the internet used to train these AI models.

Any inherent bias in the data can potentially influence the model’s outputs. Further, the algorithms used in the training process might amplify existing biases.

Dr. Fabio Motoki, the lead author of the latter study, voiced concerns over the implications of these findings, given the widespread use of platforms like ChatGPT.

Emphasizing the potential dangers, he stated, “Just as the media, the internet, and social media can influence the public, this could be very harmful.”

Meanwhile, Chan Park, from the former study, opined that no AI language model can be entirely free from biases.

The importance of these findings lies in their potential impact on products and services that millions interact with daily.

Biased AI models could inadvertently harm or offend users, further skewing the information landscape.

Tech companies developing these models have faced criticism for perceived biases.

OpenAI, for instance, has asserted its commitment to not favoring any political group and views any emerging biases as unintentional.

The rigorous examination of AI models’ political inclinations underscores the pressing need for increased transparency and scrutiny.

Both research teams hope their studies foster a broader awareness and conversation around the ethical implications of AI.

Check out our other content

×
You have free article(s) remaining. Subscribe for unlimited access.