Exploring Misogyny in the ‘ChatGPT Essay’

Written By Jessica Taylor

Edited By Colin Parker Griffiths 

Introduction 

The increasing popularity of ChatGPT in academic spaces has led to the phenomenon of the ‘ChatGPT Essay.’ Essay writing using artificial intelligence (AI) now poses a threat to teachers, who become Turing detectors upon grading assignments, and to the process of essay writing in general (Marche, 2022). Curious about how a ChatGPT essay would sound, I decided to conduct a study comparing my own writing style, grounded in feminist theory, with ChatGPT’s by asking it to write an essay mirroring one of my own, entitled Netflix’s Creation and Destruction of the Killjoy. My essay detailed how the creators of Sex Education (2023) fail to acknowledge how significantly misogynistic and racist Otis’ behavior is. Drawing on Ahmed’s theory of the feminist killjoy, I demonstrated how Otis, Sex Education’s protagonist, evades blame for his behavior, and on Lorde’s argument that women (such as O) need a creative space in which to express themselves. By contrast, ChatGPT’s essay, entitled Exploring Misogyny Through the Character of Otis in “Sex Education,” glossed over Ahmed’s and Lorde’s theories, spending more time on Otis’ “good intentions” and “personal growth” to excuse his misogynistic behavior (ChatGPT, Appendix A). I shall argue that while my original essay was a feminist analysis, ChatGPT’s replication of my work was permeated with the misogyny and racism embedded in AI, filtering the information through a gendered lens to reproduce patriarchal popular narratives, in this case, that Otis’ character is relatable and forgivable.

Study 1: Unveiling ChatGPT’s Misogynistic and Racist Biases

AI systems such as the Google search engine and ChatGPT reproduce misogynistic biases not only because they reflect the biases of their creators, but also because for-profit companies (such as Google) gain capital by maintaining the misogynistic and racist status quo. In her dissertation Searching for Black Girls, Dr. Safiya Noble (2012) describes algorithmic oppression as the process by which structural inequalities are reproduced in AI when algorithms consistently stereotype and degrade Black women (and, by extension, other oppressed and exploited populations) (p. 80). Noble (2012) highlights two culprits behind algorithmic oppression. First, AI (Google search specifically) mirrors the already-present “historically uneven distributions of power in society,” leading to search results that reproduce harmful stereotypes, such as the sexualization of Black women through porn sites returned for the search “Black girls” (p. 87). Second, Noble (2012) argues that the porn industry benefits from perpetuating racism, sexism, and misogynoir (a specific form of racialized sexism unique to the intersectional experience of Black women and femmes) (pp. 87-88). Companies in the porn industry gain more user traction, and thus more revenue, if their results show up on the first pages of Google searches. They therefore co-opt terms such as “Black girls” and “Asian women” by linking them to sexualized terms such as “pussy,” reinforcing already-present racist stereotypes in order to profit (Noble, 2012, p. 87).
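Noble’s first culprit can be made concrete with a toy model. What follows is a minimal, hypothetical sketch in Python, not Google’s actual ranking system, of how ranking by engagement compounds an initial bias: results that start near the top attract more clicks, and those clicks push them higher still.

# A minimal, hypothetical sketch of click-driven ranking feedback.
# This is not Google's actual algorithm; it only illustrates Noble's point
# that ranking by engagement can entrench an initially biased ordering.

results = [
    {"title": "sexualized result", "clicks": 120},  # starts ranked highest
    {"title": "community resource", "clicks": 30},
    {"title": "news article", "clicks": 25},
]

def rank_by_clicks(results):
    """Order results by accumulated clicks (a crude engagement signal)."""
    return sorted(results, key=lambda r: r["clicks"], reverse=True)

def simulate_search(results, rounds=5):
    """Each round, higher-ranked results attract disproportionately more
    clicks, so the initial bias compounds rather than corrects itself."""
    for _ in range(rounds):
        for position, result in enumerate(rank_by_clicks(results)):
            result["clicks"] += 10 // (position + 1)  # top slots earn more
    return rank_by_clicks(results)

print(simulate_search(results))  # the biased result stays locked at the top

Under these assumptions, no result can overtake the top slot unless users’ clicking behavior changes, which is exactly the feedback loop that lets companies profit from stereotypes already in circulation.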

ChatGPT’s essay also replicated harmful and misogynistic stereotypes to produce a narrative that is profitable to Netflix. The essay briefly mentions that “Otis displays discomfort when confronted with feminist perspectives that challenge his preconceived notions,” which ChatGPT describes as “align[ing] with aspects of misogyny” (ChatGPT, Appendix A). ChatGPT’s use of the passive voice disconnects Otis from the misogynistic behavior he exhibits, shielding him from accountability. The essay eventually concludes that Otis’ “character arc showcases a journey of self-reflection and growth,” a highly debatable claim that sympathizes with Otis and downplays the extent of his misogyny (ChatGPT, Appendix A). ChatGPT thereby takes the perspective of a privileged man, viewing Otis as a flawed but relatable character rather than thoroughly critiquing his actions as misogynistic.

I believe that ChatGPT’s essay contributes to the stereotyping of women and people of color in Sex Education (2023). For example, Yasmin Benoit, the creator of O (Otis’ rival, an asexual woman of color), has expressed her disappointment that O’s character was edited to appear more antagonistic towards Otis than she originally intended (Iftikhar, 2023). Rather than portraying O as Benoit intended, as a helpful character who reveals Otis’ misogynistic biases, the producers vilified her. This edit is more forgiving of Otis, as it casts O as an antagonist rather than as someone who exposes Otis’ flaws, and it leans on harmful stereotypes of asexual people as ‘disconnected from others’ and ‘evil’ (Iftikhar, 2023). Unsurprisingly, ChatGPT’s essay conforms to this narrative, regurgitating the stereotype of the asexual villain and replicating whatever perspective will earn Sex Education the most appreciation from a privileged audience. This provides further evidence for Noble’s argument that AI systems will reproduce harmful stereotypes when doing so creates profit for large corporations, as ChatGPT crafts a narrative sympathetic to privileged male Netflix users.

From this brief study, I conclude that even when given the opportunity to discuss feminist thought, ChatGPT can only offer a cursory overview of these concepts. For example, ChatGPT states that Otis portrays “the discomfort Ahmed describes in the face of feminist dissent,” yet it does not describe how Otis mirrors the discomfort of many white, straight men who struggle to come to terms with their male and heterosexual privilege (ChatGPT, Appendix A). ChatGPT fails to grasp how Otis exemplifies a man being confronted by a ‘feminist killjoy’ as Ahmed describes, which leads me to conclude that ChatGPT cannot adequately identify how feminist concepts such as Ahmed’s and Lorde’s could be used to critique Otis’ character. Instead, it is more adept at regurgitating the misogynistic and acephobic stereotypes that align with a privileged male narrative.

Study 2: Investigating ChatGPT’s Non-Consensual Data-Gathering Methods

I decided to conduct a second study to see whether ChatGPT’s production of misogyny was an isolated case. Could ChatGPT employ feminist concepts such as consent when directly asked? More broadly, is ChatGPT honest? I asked ChatGPT how it collects its data, wondering whether its response would discuss the ‘consent’ of those whose data bodies GPT has access to (Varon & Peña, 2021, p. 3). The reply was brief, essentially stating, “I don’t gather data myself […] However, data can be gathered in various ways: through surveys, interviews, experiments…” (ChatGPT, Appendix B). This reply confirmed my suspicion that ChatGPT does not prioritize honesty, but instead curates answers based on what will maintain the unequal status quo, much as the ChatGPT essay demonstrates a privileged and patriarchal perspective. I shall argue that ChatGPT is programmed to describe its data collection vaguely in order to continue capitalizing on marginalized groups. According to Joana Varon and Paz Peña (2021), popular AI systems such as Alexa and ChatGPT are built from data taken non-consensually from marginalized groups, exacerbating their marginalized status (p. 3). For example, the “datafication of anti-poverty programmes” is designed so that those in need can only access social program benefits and resources if they consent to the use of their data for online program development (Varon & Peña, 2021, p. 5). From my perspective, consent given only as a condition of accessing needed assistive services is not consent at all. Data-collection systems thus target the poor (and, by extension, otherwise marginalized individuals) by dismissing their right to give uncoerced consent, reaffirming those individuals’ marginal status.

ChatGPT fails to describe how poor and otherwise marginalized groups are targeted in data-gathering processes, allowing tech companies to evade responsibility for their actions. Firstly, ChatGPT’s response uses first-person language, stating “I’m designed to work with the information I’ve been trained on,” which characterizes ChatGPT as an agent. When picturing ChatGPT as agentic in its own training, users are less likely to consider the programmers and tech companies who are actually responsible for training ChatGPT and collecting its data. Moreover, ChatGPT does not specify where the aforementioned “surveys, interviews, experiments…” come from or whom they target, preventing users from fully understanding ChatGPT’s data collection process. As the ‘black box theory of AI’ suggests, users of ChatGPT are encouraged to focus on its outputs while its internal workings (in this case, its data collection methods) remain opaque (Toupin, 2023, p. 6). Thus, ChatGPT’s creators obscure how its data collection methods target marginalized communities, allowing tech companies to continue profiting from that targeting. Because targeted individuals are not informed about AI data collection methods, they lack the information and language to protest their maltreatment. Taken alongside my findings from Study 1, we can see that ChatGPT displays a pattern of dismissive behavior towards marginalized groups (such as women of color and people in poverty) in order to maintain the fiction that tech companies do not exploit marginalized people for profit.
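The ‘black box’ structure is visible in the very shape of the interface users are given. Below is a generic, hypothetical sketch of a user-facing text-generation interface (this is not OpenAI’s actual API; the class and method names are invented for illustration): the caller sees a prompt go in and text come out, and nothing in the interface exposes where the training data came from or on what terms it was collected.

# A generic, hypothetical text-generation interface (not OpenAI's real API).
# It illustrates Toupin's 'black box' point: the user-facing surface exposes
# only prompt-in, text-out; data provenance is nowhere to be queried.

class GenerativeModel:
    def __init__(self, weights):
        # The weights encode patterns from a training corpus, but the corpus
        # itself (whose words, gathered how, with what consent) is not
        # recoverable or queryable from this object.
        self._weights = weights

    def generate(self, prompt: str) -> str:
        # Users interact with outputs only.
        return f"(model output for: {prompt!r})"

model = GenerativeModel(weights=None)  # placeholder weights
print(model.generate("How do you gather data?"))
# There is no model.training_sources() or model.consent_records() to call.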

What I Suggest for AI’s Future

In accordance with Varon and Peña’s argument, AI should be built within a framework of consent, one that ensures a “higher standard of living for the vulnerable and disadvantaged” (Varon & Peña, 2021, p. 19). This would include free and easily accessible public information on AI, such as ChatGPT’s methods of data collection. Ideally, both ChatGPT and Google search would readily provide information about their data collection methods, without characterizing AI as an agent as we saw in Study 2. Such increased accessibility of the internal workings of AI would unveil technology’s ‘black box,’ allowing the general public to critique tech companies who rely on unethical data collection and the non-consensual profiling of people in poverty.
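What might that look like concretely? The sketch below is purely hypothetical (no mainstream system exposes anything like it, and the names DataSource and provenance_report are my own invention): a provenance record, attached to a model and queryable by any user, stating where training data came from, how it was gathered, and whether informed consent was recorded.

# A hypothetical sketch of the kind of disclosure that would 'unveil the
# black box': a queryable record of training-data provenance and consent.
# The names below are illustrative; no current system offers this interface.

from dataclasses import dataclass

@dataclass
class DataSource:
    name: str               # e.g. "public forum posts"
    collection_method: str  # e.g. "web scraping"
    consent_obtained: bool  # was informed, uncoerced consent recorded?

def provenance_report(sources: list[DataSource]) -> str:
    """Summarize where training data came from and on what terms, so the
    public can scrutinize the collection rather than guess at it."""
    lines = []
    for s in sources:
        status = "with consent" if s.consent_obtained else "WITHOUT consent"
        lines.append(f"- {s.name}: gathered via {s.collection_method}, {status}")
    return "\n".join(lines)

sources = [
    DataSource("public forum posts", "web scraping", consent_obtained=False),
    DataSource("licensed news archive", "commercial license", consent_obtained=True),
]
print(provenance_report(sources))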

AI systems that rely on consent are already being built; however, they lack the expansiveness of popular AI software such as ChatGPT and Google search because of tech companies’ drive for profit. For example, F’xa is a feminist chatbot designed to educate people about the misogynistic biases embedded in AI and about how users’ data is not always kept private (Schofield, 2019). While it is a useful platform for educating viewers about the issues surrounding AI in an engaging manner, it lacks the interactivity and versatility of more popular AI chatbots such as ChatGPT. For example, F’xa does not have ChatGPT’s back-and-forth dialogue or essay-generating abilities.

Webb, the creator of F’xa, comments that big companies such as Amazon and Google will respond to consumer demand (Schofield, 2019). At present, I would argue that consumer demand prioritizes the ease and accessibility of using AI, which channels greater funding into the intuitive ChatGPT over information-providing systems such as F’xa. However, as study and awareness of the misogynistic and racist biases entrenched in AI generation grow, consumer demand will hopefully pressure large companies to begin investing in feminist AI that values an open dialogue between its designers and users. This would ideally lead to popular AI programs such as ChatGPT being retrained on more equitable source material, so that the racism and misogyny of their creators do not resurface in their responses.

Conclusion 

Both of my studies demonstrated how ChatGPT will produce a dishonest narrative that contributes to the marginalization of women, people of color, and poor populations, allowing tech companies to continue profiting from marginalized people. I hope that as more people become aware of AI’s biases, there will be greater effort to undo the misogynistic and racist stereotypes that AI perpetuates, and more resources devoted to explaining its internal workings.

References

How do you gather data? ChatGPT. Retrieved November 21, 2023, from chat.openai.com.

Iftikhar, Asyia. (2023, October 2). How Sex Education Season 4 Let Down the Asexual Community With the Character of O. Pink News. http://www.thepinknews.com/2023/10/02/sex-education-asexual-o-yasmin-benoit. 

Marche, Stephen. (2022, December 16). Will ChatGPT Kill the Student Essay? The Atlantic. http://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371.

Noble, Safiya Umoja. (2012). Searching for Black Girls: Old Traditions in New Media (Doctoral dissertation, University of Illinois). Graduate Dissertations and Theses at Illinois.

Schofield, Daisy. (2019, March 23). Understanding Bias in AI With the Help of a Feminist Chatbot. Dazed. www.dazeddigital.com/science-tech/article/44489/1/understanding-ai-bias-feminist-internet-chatbot-alexa-f-xa.

Toupin, Sophie. (2023). Shaping Feminist Artificial Intelligence. New Media & Society, 26(1), 580-595. https://doi.org/10.1177/14614448221150776

Varon, Joana, & Peña, Paz. (2021). Artificial Intelligence and Consent: A Feminist Anti-Colonial Critique. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1602

Write a 5 page double spaced essay about Otis in Sex education and how he behaves misogynistically. Relate it to Lorde’s ‘poetry is not a luxury’ and Ahmed’s concept of the feminist killjoy. ChatGPT. Retrieved November 21, 2023, from chat.openai.com.

Appendix 

A) ChatGPT 

Certainly! Here’s an essay exploring Otis’ behavior in “Sex Education” and its relation to misogyny, utilizing Lorde’s “Poetry Is Not a Luxury” and Ahmed’s concept of the feminist killjoy: 

Title: Exploring Misogyny Through the Character of Otis in “Sex Education” 

Introduction: 

“Sex Education,” the popular TV series, delves into various societal themes, including sexuality, identity, and relationships. The protagonist, Otis Milburn, often depicted as a well-meaning teenager providing sex advice, presents an interesting case for analysis regarding his behavior and its reflection of misogynistic tendencies. This essay aims to dissect Otis’ actions and attitudes towards women, examining how they align with Lorde’s concept of poetry as a necessity and Ahmed’s feminist killjoy theory. 

Otis and Misogynistic Behavior: 

Throughout the series, Otis, despite his good intentions, displays behavior that could be interpreted as misogynistic. His initial motivations in providing sex advice stem from a desire to help his peers navigate their sexual experiences. However, his approach often involves objectifying women, reducing their complexities to mere sexual entities. This reduction of women to their sexual attributes perpetuates a form of misogyny by reinforcing traditional, stereotypical views of women’s roles and desires. 

Lorde’s “Poetry Is Not a Luxury”: 

Lorde’s assertion that “poetry is not a luxury” can be extrapolated to encompass the idea that the exploration of complex human experiences, including gender dynamics, is not a luxury but a necessity. Otis, in his role as a sex advisor, should consider the multifaceted nature of human relationships, especially regarding gender, instead of simplifying issues or objectifying women. Lorde’s perspective emphasizes the importance of delving deeper into these issues beyond surface-level understanding, which Otis seems to lack initially.

Ahmed’s Feminist Killjoy Concept: 

Sara Ahmed’s concept of the feminist killjoy refers to individuals who challenge societal norms, particularly those perpetuating inequality. In many instances, Otis displays discomfort when confronted with feminist perspectives that challenge his preconceived notions. When women around him challenge or question societal norms, Otis often reacts defensively or dismissively. His discomfort reflects a reluctance to acknowledge the validity of feminist critiques, portraying traits aligned with the discomfort Ahmed describes in the face of feminist dissent. 

Analyzing Otis’ Development: 

As the series progresses, Otis undergoes significant personal growth. He begins to recognize the impact of his actions and attitudes towards women. His interactions with female characters like Maeve, his mother Jean, and others lead to a gradual shift in his understanding of gender dynamics. He starts to acknowledge and challenge his previous behaviors, demonstrating an evolving awareness of the complexities of gender and sexuality.

Conclusion: 

Otis in “Sex Education” initially embodies behaviors and attitudes that align with aspects of misogyny, especially in his objectification of women and discomfort with feminist perspectives. However, his character arc showcases a journey of self-reflection and growth, emphasizing the importance of evolving perspectives and challenging societal norms. The analysis of Otis’ behavior through Lorde’s assertion that “poetry is not a luxury” and Ahmed’s concept of the feminist killjoy illuminates the complexities of gender dynamics depicted in the series and highlights the significance of personal growth in combating misogyny. Ultimately, Otis’ character serves as a reflection of the ongoing struggle to navigate gender roles and biases in contemporary society.

B) ChatGPT 

I don’t gather data myself—I’m designed to work with the information I’ve been trained on. However, data can be gathered in various ways: through surveys, interviews, experiments, observations, databases, web scraping, sensors, and more. Each method has its strengths and limitations, depending on what kind of information you’re trying to collect and analyze.
