Publication Type

Journal Article

Version

publishedVersion

Publication Date

3-2025

Abstract

The growing adoption of Artificial Intelligence (AI) across various sectors has introduced significant benefits but has also raised concerns over bias, particularly in relation to gender. Although AI has the potential to enhance sectors such as healthcare, education, and business, it often mirrors society's prejudices, which can manifest as unequal treatment in hiring decisions, academic recommendations, or healthcare diagnostics, systematically disadvantaging women. This paper explores how AI systems and chatbots, notably ChatGPT, can perpetuate gender bias arising from several sources: biased training datasets, algorithmic design choices, and human biases reinforced through user feedback loops. To mitigate these issues, various interventions are discussed, including improving data quality, diversifying datasets and annotator pools, integrating fairness-centric algorithmic approaches, and establishing robust policy frameworks at the corporate, national, and international levels. Ultimately, addressing AI bias requires a multi-faceted approach involving researchers, developers, and policymakers to ensure that AI systems operate fairly and equitably.

Keywords

Artificial intelligence, Chatbots, Gender bias, ChatGPT, Generative AI

Discipline

Artificial Intelligence and Robotics | Gender and Sexuality

Research Areas

Psychology

Publication

Computers in Human Behavior: Artificial Humans

Volume

4

First Page

1

Last Page

15

ISSN

2949-8821

Identifier

10.1016/j.chbah.2025.100145

Publisher

Elsevier

Additional URL

https://doi.org/10.1016/j.chbah.2025.100145
