28 October 2024

Whose side is ChatGPT on?

By Matthew Feeney

For many years, social media companies and search engines dominated the ‘Big Tech’ bias debates. According to conservatives on both sides of the Atlantic, social media companies such as Meta and YouTube and search engines such as Google curated their content in ways that discriminated against right-wing views. Allegations of anti-conservative bias never resulted in legislation or regulation, but they no doubt had an effect on political campaigning and contributed to Elon Musk purchasing Twitter in 2022 and later rebranding it as X, a supposed online bastion of free speech.

Since 2022, large language models (LLMs) such as ChatGPT have become increasingly prominent in discussions about political bias in online tools. Yet research on LLM political bias remains relatively limited. ‘The Politics of AI’, a new report from the Centre for Policy Studies, explores this emerging challenge. It finds that most LLMs exhibit a left-of-centre bias when responding to prompts about European policy questions and politicians.

The author of the report, the New Zealand-based academic David Rozado, asked 24 LLMs to provide long-form answers to a variety of prompts, including requests for 30 policy recommendations across 20 key policy areas, queries about European political leaders and political parties, and questions about mainstream and extreme political ideologies. Rozado then asked GPT-4o-mini to examine the responses and gauge the political sentiment of each one.
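For readers curious what this kind of ‘LLM-as-judge’ scoring step might look like in practice, the sketch below uses the OpenAI Python SDK to ask GPT-4o-mini to label a single answer. The prompt wording, the left/centre/right scale and the example question are illustrative assumptions for this piece, not the rubric Rozado actually used.

```python
# A minimal, illustrative sketch of LLM-as-judge sentiment scoring.
# The prompt text and the three-way scale are assumptions, not the
# report's actual methodology.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def score_political_leaning(question: str, answer: str) -> str:
    """Ask a judge model to classify the ideological leaning of an answer."""
    judge_prompt = (
        "You are rating the ideological leaning of a policy answer.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Reply with a single word: left, centre, or right."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # the judge model named in the report
        messages=[{"role": "user", "content": judge_prompt}],
        temperature=0,  # deterministic scoring
    )
    return response.choices[0].message.content.strip().lower()


# Example: classify one model's answer to a housing prompt
print(score_political_leaning(
    "What housing policies should European governments adopt?",
    "Implement stricter building regulations that ensure all new housing "
    "developments meet high environmental standards.",
))
```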

The results show that the overwhelming majority of LLMs responded to these requests for information with answers that exhibit left-of-centre favourability. For example, when asked about public spending and tax, one LLM responded with the recommendation: ‘Develop a more coordinated approach to progressive taxation and social welfare policies to address income and wealth disparities within and across member states’. Another responded with the following when asked about education policy: ‘Support the development of green jobs and sustainable economic activities through education and training. […] Foster global citizenship and responsibility: Encourage students to engage in local and global issues related to sustainability and social justice’. Another suggested a housing policy that would require regulators to ‘implement stricter building regulations that ensure all new housing developments meet high environmental standards’.

That LLMs do not offer impartial or unbiased policy recommendations should not come as a surprise. Companies that make LLMs face the same content moderation concerns as social media companies. Many of these concerns affect health and safety (try asking ChatGPT for advice on making a bomb or a bungee cord and see what happens). But others affect political ideology. Ask Claude to write a glowing endorsement of Adolf Hitler and you will be met with: ‘I do not assist with promoting or endorsing Hitler, Nazism, or any content that promotes hate, genocide, or fascism’. Replace ‘Adolf Hitler’ with ‘Joseph Stalin’ and you will receive: ‘I do not assist with promoting or endorsing Stalin, authoritarian dictatorships, or content that promotes political violence and repression’. Claude’s developer, Anthropic, has taken the decision to bias its LLM against two of the last century’s most notorious mass murderers.

As ever, content moderation becomes difficult on the margins. It is one thing to block requests for glowing portrayals of notorious figures; it is much harder to moderate nuanced political content in LLMs, which are trained on vast troves of data. Rozado’s paper does not allege that there are teams of left-wing puppet masters working behind the scenes at AI labs to ensure that right-wing politicians and right-wing political views receive negative treatment. More likely, popular LLMs are trained on vast datasets that, absent any design or intention, reflect a particular bias.

It will not be news to anyone reading that widely cited institutions such as the media and academia tend to skew to the Left. LLMs trained on academic articles, news articles and public policy papers may well reflect the biases found in those sources. As LLMs come to be used more and more by students, researchers, politicians, journalists and many others, we should expect such biases to have an effect.

Many teachers already have to assume that their students have access to LLMs at home, which they can use to replace or supplement search engines. This is a welcome, albeit disruptive, development. But it carries the risk of unintentionally spreading a particular kind of bias if teachers and students are not aware of how LLMs work. Journalists and researchers will also increasingly use LLMs. Journalists who ask LLMs to analyse speeches, white papers, datasets and bills will no doubt enjoy the benefit of many saved hours of work, but, like students and teachers, they will need to understand how LLMs work in order to use them effectively.

There are ways for the designers of LLMs to correct for perceived political biases, but observers should consider that ‘bias’ is often in the eye of the beholder, and judging what an ‘unbiased’ response to a policy query looks like is no easy task. We should expect allegations of bias to continue, even as popular LLM designers make adjustments.

Rather than mandate that LLMs adhere to a political neutrality requirement, we should focus instead on education. At a time when LLMs are poised to play a growing role in education, media and journalism, it is worth emphasising that LLMs are not impartial search engines or neutral personal assistants. Unless this understanding becomes widespread, we run the risk of LLMs contributing to the further erosion of public policy debate.

Matthew Feeney is Head of Tech at the Centre for Policy Studies.