GPT detectors are biased against non-native English writers
GPT detectors frequently misclassify non-native English writing as AI generated, raising concerns about fairness and robustness. Addressing the biases in these detectors is crucial to prevent the marginalization of non-native English speakers in evaluative and educational settings and to create a more equitable digital landscape.
| Published in: | Patterns (New York, N.Y.) Vol. 4; no. 7; p. 100779 |
|---|---|
| Main Authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: Elsevier Inc., 14.07.2023 |
| Subjects: | |
| ISSN: | 2666-3899 |
| Summary: | GPT detectors frequently misclassify non-native English writing as AI generated, raising concerns about fairness and robustness. Addressing the biases in these detectors is crucial to prevent the marginalization of non-native English speakers in evaluative and educational settings and to create a more equitable digital landscape. |
|---|---|
| Bibliography: | These authors contributed equally |
| DOI: | 10.1016/j.patter.2023.100779 |