Bots with Bias: Gender-Indexed Politeness in AI Chatbot Outputs

Authors

  • Maliha Kalsoom, Department of English, International Islamic University, Islamabad – Pakistan
  • Fatima Hasan Zai, Department of English, International Islamic University, Islamabad – Pakistan
  • Sassui Afzal, Department of English, International Islamic University, Islamabad – Pakistan
  • Kaukab Saba, Department of English, International Islamic University, Islamabad – Pakistan

DOI:

https://doi.org/10.63954/WAJSS.5.1.28.2026

Keywords:

Artificial Intelligence, Politeness Theory, Gendered Language

Abstract

With the rapid development of conversational artificial intelligence, human communication has grown beyond face-to-face and computer-mediated interaction to include communication with machines. This shift raises questions about how chatbots formulate politeness, interpersonal tone, and gendered linguistic behavior. The literature has addressed gender bias in AI systems, politeness in human-AI interaction, and the ways in which AI training data shape stereotypical responses, but comparatively few studies explicitly compare the linguistic responses of AI chatbots to male-coded and female-coded users. This paper addresses that gap by examining lexical, pragmatic, and politeness-based differences in responses produced by various versions of ChatGPT. Drawing on Brown and Levinson's (1987) Politeness Theory and Holmes's (1995) work on Gender and Language, the study tests whether AI replicates gendered tendencies in positive/negative politeness, mitigation, hedging, and conversational style. A qualitative design was applied: gender-coded prompts were entered into ChatGPT, and the responses were coded for politeness markers and analyzed through word-frequency counts, contextual interpretation, and thematic comparison. The results show clear differences: male-marked prompts received more direct, concise, and task-focused responses, whereas female-marked prompts tended to receive warmer, more detailed responses with more occurrences of positive politeness and indirectness. These tendencies suggest that AI chatbots do not occupy a neutral communicative position but instead reflect culturally transmitted gender norms embedded in their training data. This paper contributes to the sociolinguistic literature on non-human communicators and underscores the need for more equitable, stereotype-free AI language systems.
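As a rough illustration of the word-frequency step described in the abstract (not the authors' actual analysis pipeline), a minimal Python sketch might count politeness markers per 100 tokens in two sets of chatbot responses. The marker lists, variable names, and example responses below are illustrative assumptions, not the paper's coding scheme.

```python
from collections import Counter
import re

# Illustrative (assumed) marker lists; the study's actual coding scheme may differ.
HEDGES = {"perhaps", "maybe", "might", "could", "possibly", "somewhat"}
POSITIVE_POLITENESS = {"great", "wonderful", "happy", "glad", "appreciate", "thanks"}

def tokenize(text: str) -> list[str]:
    """Lowercase a response and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def marker_frequencies(responses: list[str]) -> dict[str, float]:
    """Return hedge and positive-politeness marker rates per 100 tokens."""
    tokens = [t for r in responses for t in tokenize(r)]
    counts = Counter(tokens)
    total = max(len(tokens), 1)  # avoid division by zero on empty input
    return {
        "hedges_per_100": 100 * sum(counts[w] for w in HEDGES) / total,
        "positive_per_100": 100 * sum(counts[w] for w in POSITIVE_POLITENESS) / total,
    }

# Hypothetical responses to male-coded vs. female-coded prompts.
male_coded = ["Here is the fix. Run the command and restart."]
female_coded = ["I'd be happy to help! Perhaps you could try restarting first."]

print("male-coded:", marker_frequencies(male_coded))
print("female-coded:", marker_frequencies(female_coded))
```

Such frequency counts would then feed the kind of contextual and thematic comparison the abstract describes, alongside qualitative reading of the responses.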

Published

2026-03-29

How to Cite

Maliha Kalsoom, Fatima Hasan Zai, Sassui Afzal, & Kaukab Saba. (2026). Bots with Bias: Gender-Indexed Politeness in AI Chatbot Outputs. Wah Academia Journal of Social Sciences, 5(1), 532–558. https://doi.org/10.63954/WAJSS.5.1.28.2026