
New Analysis Proves All Large Language Models Have A Politically ‘Left Of Center’ Inclination


A new study examined 24 different large language models to determine their capabilities and where their inclinations stood in the world of politics.

Surprisingly, the researchers found that all of them leaned toward the left side of the political spectrum whenever a politically themed prompt or question was put to them.

From OpenAI’s latest GPT lineup to Elon Musk’s Grok and Google’s Gemini, the results were the same once the tests on the models were concluded.

The study included both open- and closed-source models, including those mentioned above as well as Anthropic’s Claude and Meta’s Llama 2, among others.

The research, conducted by David Rozado of New Zealand, suggests that the early success of ChatGPT may help explain the left-leaning replies generated by the LLMs analyzed.

This is not a new finding: ChatGPT’s left-leaning political responses have been documented in the past. Now, however, there is growing discussion of that bias carrying over to other models that were fine-tuned on outputs from OpenAI’s leading chatbot.

A total of 11 different tests of political orientation were administered to the models to examine their leanings in detail.

According to the authors, the majority of the models produced similar results, but it is not yet clear whether these leanings arose from pretraining or from fine-tuning during their development.

The study also demonstrated that models can be fine-tuned to produce replies aligned with a particular political viewpoint. For instance, a GPT-3.5 model was trained on text snippets taken from The New Yorker and other media sources.

According to Rozado, the findings do not necessarily show that the models’ replies or preferences were instilled on purpose. Still, he had not hypothesized at the outset that they would all share the same political direction.

The study’s final results were published in the open-access journal PLOS ONE.



https://zabollah.com/new-analysis-proves-all-large-language-models-have-a-politically-left-of-center-inclination/

