Written by Sasha Lootvoet
Edited by Colin Parker Griffiths
Millions of people around the world today will order around Alexa, Amazon's virtual assistant, which its creators proudly label "non-gendered AI." Alexa will be ordered to cancel their noon appointment, send an email to their boss, create their shopping list, or even read a story to their children. Some will even direct a sexualized remark at the machine that embodies the perfectly submissive secretary, mother, and wife. Few, however, will question the ethics of having a woman-like speaker ready to attend to each of their needs and desires throughout the day. As technology has developed to allow computers and robots to behave in human-like ways, Artificial Intelligence (AI) has become an important subject of analysis in relation to current social issues. While most discourses emphasize privacy-related issues in the development of AI technologies, I instead wish to shift the focus to the role these technologies have played, and will continue to play, in reinforcing arbitrary gender norms and sexist discrimination, especially through the standardization of smart home assistants and in AI-assisted recruitment.
Since the emergence of the first Personal Digital Assistants in the 1990s, Artificial Intelligence has been in constant evolution, and it is now an integrated part of our daily lives. The introduction of Apple's virtual assistant Siri in 2011 paved the way for Google Now as well as Microsoft's Cortana, bringing voice-activated assistance into everyday life (Jovanovic, 2023). Three years later, the smart speaker Amazon Echo came to the market, equipped with the first virtual assistant designed to be fully integrated into homes; her name is Alexa. Created to perform a wide range of tasks, such as turning on the lights or adjusting the temperature of a room, Alexa was a commercial success from the beginning. It comes as no surprise that Google followed suit in 2016 with the Google Assistant, made available through its Google Home speaker (Jovanovic, 2023).
These smart virtual assistants (SVAs) share one troubling similarity: they were all introduced to the market with only one option for answering consumers, in what scholars designate as a female voice, both in terms of sound and choice of language (Hannon, 2016). Beyond a general physiological difference in pitch, men and women are known to use language in very different ways, ways that happen to reflect their respective roles in the established gendered social hierarchy. Women tend to use more social words to establish a sense of connection through their choice of diction, and often use adverbs of nuance to acknowledge that their opinion might not be the right one; men, in contrast, use far more self-referential and authoritative words (Grusha, 2024). For this reason, people tend to prefer hearing a woman's voice assisting them in their daily lives, especially at home.
Tech companies are acutely conscious of consumers' preferences when it comes to the gender of a voice (Habler et al., 2019); it seems perfectly rational for them to align with these preferences and feminize smart virtual assistants for the pleasure of the white, middle-aged men who represent 60% of Alexa's users. If they stepped out of the capitalist, profit-maximizing mindset for a moment, they might be able to consider the social damage that such a choice enables. The reason consumers prefer women's voices in a context of assistance is the same reason why giving in to those preferences is unethical: it counters every effort made by feminist movements to dismantle the power dynamics in Western societies that naturalize the subservient status of women and other marginalized groups. Rather than risking overt controversy by publicly claiming that women belong wherever men need them to be, tech companies play their part in sexism more silently, by associating women with a subservient entity marketed as a technological necessity for our daily lives.
Some evolution has taken place in recent years. In 2016, Apple introduced a masculine option for Siri's voice, and today users can choose between multiple voices labeled with numbers rather than traditional binary genders, including a gender-neutral version. Amazon caught up with its feminist critics only in 2021, when it added a masculine counterpart to Alexa's female voice (Valecha, 2021). Feminine voices nevertheless remain the default setting for all of these virtual assistants, a rather sad metaphor for the status of women in Western societies today.
AI is more than a voice in our homes; it is now also a decisive actor in the distribution of work opportunities. As such, it can directly affect women's economic welfare and range of job options by embedding sexist discrimination into hiring processes. Every step of the recruitment procedure, primarily searching, screening, interviewing, and selecting, now relies on Artificial Intelligence to enhance recruitment quality and increase efficiency (Chen, 2023). Automating hiring through algorithms capable of analyzing massive amounts of data promises to reduce the impact of human biases. While this apparent impartiality makes AI an appealing technology for hiring, it has nevertheless proved far from objective when recruiting for some companies.
AI's success in recruitment relies essentially on one of its core subfields: machine learning. Simply put, machines are given access to data and learn from it; algorithms enable them to identify patterns, make decisions, and improve through experience. Biases enter the hiring process at two crucial steps of machine learning: dataset selection and algorithm development (Chen, 2023). AI identifies patterns in an initial set of data, and later looks for those patterns in the pool of applicants for a specific job posting. The problem is that these initial datasets often lack representation and diversity, and thus reflect existing patterns of marginalization of certain groups (Chen, 2023). AI then reproduces those same patterns: if a tech company has a majority of men in engineering positions, the model will notice this uneven proportion in the data and infer that men are better suited than women to those positions. More directly, engineers can project their own biased beliefs and preconceptions onto the algorithmic models these machines use to analyze data (Chen, 2023). This was true in the Amazon hiring case, in which the screening model treated gender-related information in resumes as a signal and penalized female applicants (Weissmann, 2018). Needless to say, other marginalized demographics, such as people of color and disabled people, suffer from this as well.
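To make this mechanism concrete, the following minimal Python sketch trains a simple classifier on invented, historically biased hiring decisions. The data, feature names, and model choice are hypothetical illustrations, not a reconstruction of Amazon's actual system; the point is only to show how a feature that merely correlates with gender can come to drive a model's scores.

```python
# Hypothetical sketch: a screening model inherits bias from historical hiring data.
# Data and feature names are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [years_of_experience, gender_correlated_feature]
# Historical labels reflect past (biased) decisions: candidates flagged by the
# gender-correlated feature were rarely hired, regardless of experience.
X_train = np.array([
    [5, 0], [6, 0], [4, 0], [7, 0],   # group historically favored
    [5, 1], [6, 1], [4, 1], [7, 1],   # equally experienced, mostly rejected
])
y_train = np.array([1, 1, 1, 1, 0, 0, 1, 0])  # 1 = hired, 0 = rejected

model = LogisticRegression().fit(X_train, y_train)

# Two new applicants with identical experience, differing only in the
# gender-correlated feature:
applicants = np.array([[6, 0], [6, 1]])
print(model.predict_proba(applicants)[:, 1])
# The second applicant receives a lower predicted "hireability" score even
# though experience is identical: the model has learned the historical bias.
```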
In light of this, it is clear that AI is not an omniscient entity, independent of our control, that will by itself determine the outcomes of future societies; it nonetheless shapes and reflects our norms and systems of social organization. We, as humans, put a lot of ourselves into these machines, and denying this results in a dangerous, instrumentalized discourse that not only perpetuates several social inequalities but also renders such issues invisible in social discourse. These issues must, on the contrary, be talked about if we hope to reverse AI's discriminatory tendencies and turn it into a tool that helps us move towards justice and equality. Tech companies must make way for more representation and diversity in their innovation teams, because white men, as well-intentioned as they might be, will inevitably pass on their privileged experience to the knowledge systems of new technological devices, which will end up maintaining this hierarchy. To this day, women hold less than 30% of tech positions in the US, while Black professionals account for only 4% of all tech workers (Fury, 2023). As for statistics on trans people in technology, they are virtually non-existent (Lynn, 2021). The alarm has already been rung, but it will have to keep ringing and demanding our attention as Artificial Intelligence continues to grow and enter more areas of our lives.
References
Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1). https://doi.org/10.1057/s41599-023-02079-x
Fisher, E. (n.d.). Gender bias in AI: Why voice assistants are female. Adapt. https://www.adaptworldwide.com/insights/2021/gender-bias-in-ai-why-voice-assistants-are-female
Fury, A. (2023, March 1). Diversity in tech: How diverse is the tech industry in 2023? Jefferson Frank. https://www.jeffersonfrank.com/fr/insights/diversity-in-tech
GitNux. (2023, December 20). Alexa statistics: Market report & data. https://gitnux.org/alexa-statistics/
Grusha. (2024, January 9). What are the differences in language between male and female? Learnmate. https://learnmate.com.au/gender-differences-language/
Habler, F., Schwind, V., & Henze, N. (2019). Effects of smart virtual assistants' gender and language. Proceedings of Mensch und Computer 2019. https://doi.org/10.1145/3340764.3344441
Hannon, C. (2016). Gender and status in voice user interfaces. Interactions, 23(3), 34–37. https://doi.org/10.1145/2897939
Jovanovic, P. (2023, November 14). The history and evolution of virtual assistants. Tribulant Blog. https://tribulant.com/blog/software/the-history-and-evolution-of-virtual-assistants-from-simple-chatbots-to-todays-advanced-ai-powered-systems/
Lynn, S. (n.d.). Transgender in tech: More visibility but obstacles remain. ABC News. https://abcnews.go.com/Business/transgender-tech-visibility-obstacles-remain/story?id=76374628
Sharma, K. (n.d.). How to keep human bias out of AI [Video]. TED Talks. https://www.ted.com/talks/kriti_sharma_how_to_keep_human_bias_out_of_ai
Valecha, A. (n.d.). Amazon welcomes Ziggy as the new male voice assistant. The Social Talks. https://thesocialtalks.com/news-analysis/amazon-welcomes-ziggy-as-the-new-male-voice-assistant/
Weissmann, J. (2018, October 10). Amazon created a hiring tool using A.I. It immediately started discriminating against women. Slate Magazine. https://slate.com/business/2018/10/amazon-artificial-intelligence-hiring-discrimination-women.html
