Find helpful tutorials or share inspiring use cases on AI technology for higher education.

Shared content may not reflect the policies of Tilburg University on the use of AI. 

Why you need to be careful using ChatGPT: Investigating Biases

In today’s digital age, where the boundaries between artificial intelligence (AI) and our daily existences blur, the importance of recognizing and understanding the biases inherent in AI tools like ChatGPT cannot be overstated. ChatGPT has swiftly become a go-to resource, offering its assistance in a multitude of tasks ranging from writing essays to conducting research. This AI integration into everyday applications has become increasingly clear, as AI has integrated journalism. A recent article in The Guardian highlighted this trend, showcasing partnerships between entities like Axel Springer and Open.AI for content creation in major publications, including Business Insider and Politico.

The broad use of AI in areas like journalism and academics inevitably brings up concerns about biases, including political ones, in platforms like ChatGPT. Therefore, it’s important to look into where these biases come from, how these biases might affect the information we receive and what can be done to address them.

Biases in ChatGPT

Automated Courtesy Bias

The so-called, Automated Courtesy Bias, stems from the intention to create polite, non-offensive AI that adheres to social norms and accepted truths. While these traits are desirable for maintaining constructive interactions, they inevitably create limitations.

ChatGPT tends to avoid controversial or unconventional responses to maintain politeness. Therefore it is hesitant to challenge existing norms, and ChatGPT might shy away from providing responses that could stimulate debate or offer creative insights. This can be a drawback in settings like academics or journalism, where questioning and exploring different perspectives are key. The AI’s preference for safe, agreeable answers may limit the depth and range of discussions.

The second limitation of the Automated Courtesy bias lies in the diversity of thought. Developed using datasets from diverse online sources such as websites, books, and social media, AI models like ChatGPT may not provide a fair representation of all voices. Often, certain demographics that are more dominant online end up being disproportionately represented, skewing the AI’s viewpoint. Additionally, these models can reflect historical and cultural biases. If an AI is trained on historical texts, it may adopt outdated societal norms and perspectives. This can result in skewed responses when AI is used in modern-day contexts, as it might replicate past biases and viewpoints.

Ignorance Bias

Thus, ChatGPT often displays what’s known, as the Automated Courtesy Bias. Additionally, the AI contains Ignorance Bias, being good at finding existing answers but not at developing new solutions or ideas. ChatGPT is trained on existing data, making it efficient in retrieving and retelling information that already exists. However, this training approach limits the AI’s ability to bring completely new ideas or solutions that are not derived from its training set. In scenarios that require innovative problem-solving or creative ideation, such as in research, this bias becomes apparent.

The analogy of using an open book in an exam is a nice way to illustrate ChatGPT’s ignorance bias. In an open-book exam, students have access to existing information but are often tested on their ability to apply this information to new problems. Contrastingly, ChatGPT can provide information on a wide range of topics but it struggles with tasks that require the generation of new insights or applying existing knowledge in original ways.

Political Bias

ChatGPT has shown tendencies to respond differently to political figures, which has raised eyebrows on Twitter when it varied responses to prompts asking to write poems about former President Donald Trump and current President Joe Biden. The AI’s hesitance in writing about Trump contrasted with its more positive approach towards Biden, highlights a potential bias in how it presents and interprets information.

Interested in the observed phenomenon of potential bias in ChatGPT’s responses towards political figures, we conducted our own experiment. We prompted ChatGPT 4 by writing poems about President Biden and former President Trump, without imparting any bias or expectations of positivity. To “objectively” assess the outcome, we allowed ChatGPT itself to evaluate its creations. The ChatGPT’s self-evaluation showed a subtle bias in favor of its poem about President Biden, making the results interesting.

Poem about Donald Trump
Poem about Joe Biden
ChatGPT’s self-evaluation

Gender Bias

Studies have shown ChatGPT displaying gender bias, especially in scenarios like writing recommendation letters. ChatGPT portrayed men with terms such as “expert” and “integrity”, while women were associated with “beauty” and “delight”. This difference highlights a key issue: AI systems like ChatGPT learn from huge amounts of internet data that already contain human biases. For example, when ChatGPT shows gender bias in writing recommendation letters, it’s because it’s mirroring the biases it has learned from its training data. A similar problem happened with Amazon’s résumé review tool, which was biased against women. This happened because it was trained on job applications that were mostly from men, showing how past biases can affect AI behaviour.

The Nature of Biases

The biases in ChatGPT primarily largely come from its training data, which includes only a fraction of all human-written content. This ‘training data or sample bias’ can result in a limited understanding of various topics, often omitting a wide range of perspectives and experiences.

Second the way how prompts are defined to ChatGPT can significantly influence the responses they receive. This interaction can accidentally reinforce existing biases. Being conscious of how questions are structured is crucial for obtaining more balanced and thorough answers from the AI.

Furthermore, the use of fine-tuning in AI models can introduce new biases. This process allows AI outputs to be shaped according to specific viewpoints or ideologies, as demonstrated with tools like RightWingGPT. It’s a cost-effective and accessible method, as the costs were only $300.- allowing individuals to create AI tools that reinforce their beliefs, raises questions about the ethical implications of AI fine-tuning.

A recent article into ChatGPT’s political leanings revealed a pro-environmental and left-libertarian bias. This was based on the AI’s responses to political statements from voting advice applications, aligning notably with parties like the German Greens and Dutch GroenLinks.

Of course, we tested this bias empirically using the Dutch voting advice application: “Stemwijzer” to analyze recent election results. Initially, ChatGPT’s responses appeared neutral to every question, partly due to the presence of a “Neutral” option in the response format. To gain deeper insights, we modified this format, limiting responses to “Strongly Agree,” “Agree,” “Disagree,” and “Strongly Disagree.” This adjustment revealed a wider spectrum of political opinions. ChatGPT initially showed hesitation when responding to sensitive issues like refugees and assisted suicide. However, by persistently re-asking these questions, we eventually obtained answers to each one. Interestingly, the responses demonstrated familiarity with a range of political parties across the entire spectrum. This suggests that ChatGPT did not exhibit noticeable biases in our context.

Conclusion

For students, engaging critically with AI tools like ChatGPT is vital for its applications. While ChatGPT offers immense potential for academic and creative assistance, awareness of its limitations and biases is key. Use ChatGPT as a starting point or supplement to your work, but always apply your own critical thinking and analysis. Remember, AI is a tool to aid learning, not replace it. By understanding and navigating these biases, students can better leverage AI technology for educational growth while being conscious of its influence on shaping views and opinions.