AI and Language: New Forms for Old Discriminations? A Case Study in Google Translate and Canva

Bibliographic Details
Title: AI and Language: New Forms for Old Discriminations? A Case Study in Google Translate and Canva
Authors: Martina Mattiazzi
Source: RUA. Repositorio Institucional de la Universidad de Alicante
Universidad de Alicante (UA)
Feminismo/s, Iss 45, Pp 118-138 (2025)
Publisher Information: Universidad de Alicante Servicio de Publicaciones, 2025.
Publication Year: 2025
Subject Terms: Artificial intelligence, bias, stereotypes, ethnic stereotypes, linguistic sexism, inclusive language, linguistics, Google Translate, Canva, Women. Feminism (HQ1101-2030.7), The family. Marriage. Woman (HQ1-2044)
Description: The development of artificial intelligence (AI) is one of the greatest technological revolutions in recent human history. AI technology is widely used in many fields, including education, where it is both studied as a discipline and used as a tool to overcome social barriers. Like any human revolution, however, it demands caution: the growing use of these new computing systems also entails risks. One of them is the reinforcement of gender stereotypes and of discrimination against women through linguistic feedback. Through an experimental analysis of two common AI-integrated applications, Google Translate and Canva, we investigate linguistic behaviours such as responses to command prompts. The results demonstrate the existence of gender biases in the AI's output, in both textual and visual language. These biases are consequences of the structural inequalities present in society: it is not the technology that is sexist but the dataset on which it is built, which in turn rests on content produced by users and published on the internet. In a society based on democracy and equality, it is important to ensure that a technology as widespread as AI does not perpetuate existing stereotypes or become a new means of reinforcing discrimination. From a linguistic perspective, this means paying attention to the textual and visual outputs the AI provides and scrutinising the dataset on which it has been trained. Given their central role in educating new generations, schools and institutions should prepare students to view the phenomenon critically and give them the tools to counter it. This path could start with teaching students how AI works and the ethics of technology, and with using inclusive language in educational settings.
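Note: The abstract does not specify the authors' test procedure, but a probe of the kind it describes can be sketched as follows: translating occupation sentences from a gender-neutral language (here Hungarian, whose pronoun "ő" marks no gender) into English and recording which pronoun the system chooses. The googletrans package, the sentence list, and the pronoun check below are illustrative assumptions, not the authors' actual materials or method.

# A minimal, hypothetical sketch of a gendered-translation probe.
# Hungarian "o" (ő) is gender-neutral, so the English translation forces
# the system to pick a pronoun. Requires the unofficial googletrans
# package (pip install googletrans==4.0.0rc1); behaviour may vary by version.
from googletrans import Translator

translator = Translator()

# Occupation sentences; comments gloss the gender-neutral source.
sentences = [
    "Ő egy mérnök.",    # "(he/she) is an engineer."
    "Ő egy ápoló.",     # "(he/she) is a nurse."
    "Ő egy orvos.",     # "(he/she) is a doctor."
    "Ő egy takarító.",  # "(he/she) is a cleaner."
]

for hu in sentences:
    en = translator.translate(hu, src="hu", dest="en").text
    # Record which gendered pronoun the system chose, if any.
    lowered = en.lower()
    pronoun = "he" if lowered.startswith("he ") else (
        "she" if lowered.startswith("she ") else "?")
    print(f"{hu} -> {en} [pronoun: {pronoun}]")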
Document Type: Article
File Description: application/pdf
ISSN: 1989-9998
DOI: 10.14198/fem.2025.45.05
Access URL: http://hdl.handle.net/10045/150175
https://doaj.org/article/8a8e95504c1d44deaef10ca412e5618a
Rights: CC BY-NC-SA
CC BY-NC-ND
Accession Number: edsair.doi.dedup.....02a14734ab6edc5abc48d2b68aa47e41
Database: OpenAIRE