Bots with Bias: Gender-Indexed Politeness in AI Chatbot Outputs
DOI:
https://doi.org/10.63954/WAJSS.5.1.28.2026
Keywords:
Artificial Intelligence, Politeness Theory, Gendered Language
Abstract
With the rapid development of conversational artificial intelligence, human communication has grown beyond face-to-face and computer-mediated interaction to include communication with machines. This shift raises questions about how chatbots formulate politeness, interpersonal tone, and gendered linguistic behavior. The literature has addressed gender bias in AI systems, politeness in human-AI interaction, and how AI training data shape stereotypical responses, but relatively few studies explicitly compare the linguistic responses of AI chatbots to male-coded and female-coded users. This paper fills that gap by examining lexical, pragmatic, and politeness-based differences in responses produced by various versions of ChatGPT. Drawing on Brown and Levinson's (1987) Politeness Theory and Holmes's (1995) work on gender and language, the study examines whether AI replicates gendered tendencies in positive/negative politeness, mitigation, hedging, and conversational style. A qualitative design was applied: gender-coded prompts were entered into ChatGPT, and the responses, coded for politeness markers, were analyzed through word-frequency counts, contextual interpretation, and thematic comparison. The results reveal clear differences: prompts marked as male received more direct, concise, and task-focused replies, whereas prompts marked as female received warmer, more detailed responses featuring more frequent positive politeness and indirectness. These tendencies suggest that AI chatbots do not assume a neutral communicative stance but instead reflect culturally transmitted gender norms embedded in their training data. The study contributes to sociolinguistic scholarship on non-human communicators and underscores the need for more equitable, stereotype-free AI language systems.
Copyright (c) 2026 Maliha Kalsoom, Fatima Hasan Zai, Sassui Afzal, Kaukab Saba

This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright and Licensing
Publication is open access
Creative Commons Attribution License - CC BY 4.0
Copyright: The authors retain unrestricted copyright and publishing rights
