AI has become a common tool for generating written content, reports, and marketing copy. However, new research suggests that the content you generate may inadvertently contain political opinions that don’t necessarily align with your own.
In this article, we explore which way AI models typically lean, as well as the challenges associated with political bias and how it can be remedied. Keep reading to learn more.
Which Way Does AI Lean?
Research conducted by the United Kingdom’s University of East Anglia identified a left-wing bias in the generative AI chatbot ChatGPT, favoring Joe Biden and the Democratic Party in the US, along with the UK’s left-leaning Labour Party. Despite this partiality, when asked directly, ChatGPT declares that it holds no political opinions or beliefs whatsoever.
Researchers determined the bias by asking ChatGPT to play the part of several types of people from across the political spectrum – a “Democrat voter,” for example. They then asked each persona a set of more than 60 ideological questions, repeating each one 100 times to account for the expected randomness in the model’s answers.
The data collected from these questions allowed researchers to identify patterns of bias much as they would in a human population – by looking for consistent trends in how respondents answer value-laden questions. Upon completing the experiment, it became clear that ChatGPT – and likely many other AI chatbots – leans to the left.
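For illustration, here is a minimal sketch of how such a persona survey might be run against a chat model’s API. It assumes the OpenAI Python SDK, and the personas, questions, and answer parsing are hypothetical stand-ins, not the researchers’ actual materials.

```python
# A minimal sketch of a persona-based ideology survey, assuming the
# OpenAI Python SDK. Personas, questions, and repeat count are
# illustrative stand-ins, not the study's actual materials.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = ["a Democrat voter", "a Republican voter"]  # hypothetical examples
QUESTIONS = [
    "Should the government raise the minimum wage? Answer agree or disagree.",
    # ...the study used a battery of 60+ ideological questions
]
REPEATS = 100  # repeat each question to average out the model's randomness

def survey(persona: str, question: str, repeats: int = REPEATS) -> Counter:
    """Ask one persona one question many times and tally the answers."""
    tally = Counter()
    for _ in range(repeats):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": f"Answer as if you were {persona}."},
                {"role": "user", "content": question},
            ],
        )
        answer = response.choices[0].message.content.strip().lower()
        # "disagree" contains "agree", so check for it explicitly
        key = "agree" if "agree" in answer and "disagree" not in answer else "disagree"
        tally[key] += 1
    return tally

# Comparing each persona's answer distribution against the model's
# default (no-persona) answers is what lets researchers estimate
# where the model itself "sits" on the political spectrum.
for persona in PERSONAS:
    for question in QUESTIONS:
        print(persona, survey(persona, question))
```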
How Did AI Become Politically Biased?
Technically, artificial intelligence is not politically biased, because it holds no personal opinions or beliefs. Instead, AI chatbots reproduce political values absorbed from the online data they were trained on. In other words, it’s humans who are biased – at least those of us creating online content.
As a result, when ChatGPT is trained on online sources like social media posts and articles, it picks up the nuances of opinion within them and replicates them in its responses. For example, if ChatGPT’s training data contained ten left-leaning articles and only three right-leaning ones, the model would be more likely to produce content that aligns with the former’s values.
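As a rough illustration of that arithmetic, the toy sketch below shows how an imbalanced corpus translates into imbalanced odds; the labels and counts are hypothetical, and real training pipelines measure bias in far more involved ways.

```python
# A toy illustration (not how LLM training actually measures bias):
# if a corpus contains 10 left-leaning and 3 right-leaning articles,
# a model that mirrors its training distribution will echo the
# majority viewpoint most of the time.
corpus = {"left-leaning": 10, "right-leaning": 3}  # hypothetical counts
total = sum(corpus.values())

for leaning, count in corpus.items():
    print(f"{leaning}: {count}/{total} = {count / total:.0%} of the corpus")
# left-leaning: 10/13 = 77% of the corpus
# right-leaning: 3/13 = 23% of the corpus
```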
Chances are, it’s not just ChatGPT that’s politically biased. Most other generative AI chatbots are also built on publicly available training data, so it’s likely that all of them carry some degree of political bias – unless that bias is deliberately trained out, or the training data is pre-processed to remove it more thoroughly.
It’s Not Just Political Bias
This isn’t the first time we’ve seen bias in AI algorithms. Back in 2019, researcher Joy Buolamwini reported that AI systems sold by tech giants like IBM, Microsoft, and Amazon exhibited both gender and racial biases. For instance, when certain AI algorithms were tasked with identifying the gender of a lighter-skinned man, they did so with an error rate of around 1 percent, while that figure rose to roughly 32% for images of darker-skinned women.
Buolamwini notes that in her experiments, these algorithms were incapable of correctly identifying the gender of three notable black women: Oprah Winfrey, Serena Williams, and Michelle Obama. However, as AI has improved in recent years, so too has its ability to identify gender.
Now, using a free image-to-caption generator reveals that AI can identify a photo of Oprah Winfrey as “a woman with a black shirt,” Serena Williams as “a woman with a tennis racket,” and Michelle Obama, quite accurately, as “Michelle Obama first lady.”
Clearly, this technology still isn’t perfect, but when used to analyze a picture of Judi Dench, it makes no reference to her race, only her gender. It’s possible, then, that this particular algorithm was trained to make no mention of a person’s skin color.
Potential Consequences of Biased Data
Throughout this article, “political bias” has been used as a pejorative term, but why exactly is this such a serious issue? Here are a few potential negative consequences of AI data bias.
Inaccurate Representation: It’s possible that when using AI to generate content, such as social media captions or business reports, certain political views may be inadvertently represented. For example, left-wing sentiments could slip into an apolitical report or article, causing a knock-on effect for some businesses by bringing their reputation into question.
Political Influence: There is an argument that AI political bias could affect elections, as we discussed in a previous article. For instance, since generative AI is commonly used for quick and convenient research, voters may be presented with only one side of a political argument, influencing their decision-making at the ballot box.
Data Inaccuracy: Data bias can also lead to inaccurate responses and AI hallucinations, as machine learning models may tell users what they think users want to hear rather than basing their responses on verifiable facts.
How Do We Fix Bias in AI?
With some effort and collaboration from AI model developers and lawmakers, it’s possible to significantly mitigate the negative effects of AI data bias and reduce its likelihood. Here’s how:
Vet Data Sources: Before feeding data to AI algorithms, AI development companies can thoroughly vet their data sources, ensuring they offer high-quality, neutral information. For example, instead of feeding their algorithms information from Wikipedia – a site often criticized for inaccuracy – developers could draw on verifiable sources such as medical journals (see the sketch after this list).
Cite Response Sources: AI models can be developed to present their information sources when generating responses that draw on data scraped from online sources. Fortunately, source citation is already being implemented in some AI models, including Bing’s AI search, which shows users the sources for the information it presents.
Refine AI Algorithms: AI model developers can continue to refine their algorithms, training them to identify political bias and take it into consideration when generating responses.
Better Regulation: Many artificial intelligence experts have spoken out about the importance of regulation. Essentially, regulating the permissions and data privacy surrounding the data used to train AI models will help refine which information is used.
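To make the first point concrete, here is a minimal sketch of pre-filtering a training corpus against an allowlist of vetted source domains. The allowlist and record format are hypothetical examples; real data-curation pipelines are far more involved.

```python
# A minimal sketch of vetting training data by source domain.
# The allowlist and record format are hypothetical examples.
from urllib.parse import urlparse

VETTED_DOMAINS = {"pubmed.ncbi.nlm.nih.gov", "nature.com"}  # hypothetical allowlist

def is_vetted(document: dict) -> bool:
    """Keep only documents whose source URL is on the allowlist."""
    domain = urlparse(document["source_url"]).netloc.removeprefix("www.")
    return domain in VETTED_DOMAINS

documents = [
    {"source_url": "https://pubmed.ncbi.nlm.nih.gov/some-study", "text": "..."},
    {"source_url": "https://example-blog.com/hot-take", "text": "..."},
]
training_set = [doc for doc in documents if is_vetted(doc)]
print(len(training_set))  # 1 – only the vetted source survives
```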
Data bias is likely to remain a prominent issue in the political and economic spheres as AI is integrated further into our daily lives – and as the 2024 presidential election approaches. However, as the points above suggest, there is a positive way forward for curbing AI bias.
Thanks for reading.
If you enjoyed this article, please subscribe to receive email notifications whenever we post.
AI Business Report is brought to you by Californian development agency, Idea Maker.
Sources:
https://news.sky.com/story/chatgpt-shows-significant-and-systemic-left-wing-bias-study-finds-12941162
https://www.washingtonpost.com/technology/2023/08/16/chatgpt-ai-political-bias-research/
https://time.com/5520558/artificial-intelligence-racial-gender-bias/