More polished, not necessarily more learned: LLMs and perceived text quality in higher education
| Title: | More polished, not necessarily more learned: LLMs and perceived text quality in higher education |
|---|---|
| Authors: | Tärning, Betty, Tjøstheim, Trond A., Wallin, Annika |
| Contributors: | Lund University, Profile areas and other strong research environments, Lund University Profile areas, LU Profile Area: Natural and Artificial Cognition (Originator); Lund University, Joint Faculties of Humanities and Theology, Departments, Department of Philosophy, Cognitive Modelling (Originator); Lund University, Profile areas and other strong research environments, Strategic research areas (SRA), eSSENCE: The e-Science Collaboration (Originator); Lund University, Joint Faculties of Humanities and Theology, Departments, Department of Philosophy, The Educational Technology Group (Originator); Lund University, Joint Faculties of Humanities and Theology, Departments, Department of Philosophy, Cognitive Science (Originator) |
| Source: | Frontiers in Artificial Intelligence. |
| Subject Terms: | Natural Sciences, Computer and Information Sciences, Artificial Intelligence, Social Sciences, Educational Sciences |
| Description: | The use of Large Language Models (LLMs) such as ChatGPT is a prominent topic in higher education, prompting debate over their educational impact. Studies on the effect of LLMs on learning in higher education often rely on self-reported data, leaving an opening for complementary methodologies. This study contributes by analysing actual course grades as well as ratings by fellow students to investigate how LLMs can affect academic outcomes. We investigated whether using LLMs affected students’ learning by allowing them to choose one of three options for a written assignment: (1) composing the text without LLM assistance; (2) writing a first draft and using an LLM for revisions; or (3) generating a first draft with an LLM and then revising it themselves. Students’ learning was measured by their scores on a mid-course exam and final course grades. Additionally, we assessed how students rated the quality of fellow students’ texts for each of the three conditions. Finally, we examined how accurately fellow students could identify which LLM option (1–3) was used for a given text. Our results indicate only a weak effect of LLM use. However, writing a first draft and using an LLM for revisions compared favourably to the ‘no LLM’ baseline in terms of final grades. Ratings of fellow students’ texts were higher for texts created using option 3, specifically regarding how well-written they were judged to be. Regarding text classification, students most accurately identified the ‘no LLM’ baseline, but could not identify texts generated by an LLM and then edited by a student at better than chance level. |
| File Description: | electronic |
| Access URL: | https://lucris.lub.lu.se/ws/files/234491786/Ta_rning_2025_-_More_polished_not_necessarily_more_learned.pdf |
| Database: | SwePub |