Digital Module 18: Automated Scoring https://ncme.elevate.commpartners.com

Detailed Bibliography
Title: Digital Module 18: Automated Scoring https://ncme.elevate.commpartners.com
Authors: Sue Lottridge, Amy Burkhardt, Michelle Boyer
Source: Educational Measurement: Issues and Practice. 39:141-142
Publisher Information: Wiley, 2020.
Publication Year: 2020
Subjects: 0504 sociology, 4. Education, 05 social sciences, 0503 education
Description: In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open‐ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows scores to be returned faster at lower cost. In the module, they discuss automated scoring from a number of perspectives. First, they discuss benefits and weaknesses of automated scoring, and what psychometricians should know about automated scoring. Next, they describe the overall process of automated scoring, moving from data collection to engine training to operational scoring. Then, they describe how automated scoring systems work, including the basic functions around score prediction as well as other flagging methods. Finally, they conclude with a discussion of the specific validity demands around automated scoring and how they align with the larger validity demands around test scores. Two data activities are provided. The first is an interactive activity that allows the user to train and evaluate a simple automated scoring engine. The second is a worked example that examines the impact of rater error on test scores. The digital module contains a link to an interactive web application as well as its R‐Shiny code, diagnostic quiz questions, activities, curated resources, and a glossary. (An illustrative sketch of the described train-and-score flow follows this record.)
Document Type: Article
Language: English
ISSN: 1745-3992
0731-1745
DOI: 10.1111/emip.12388
Rights: Wiley Online Library User Agreement
Accession Number: edsair.doi...........d8ce442e4ac61733c348b39c8c13f7dd
Database: OpenAIRE
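
Illustrative sketch (R): The description above outlines an activity for training and evaluating a simple automated scoring engine, with companion R-Shiny code. The sketch below is not that engine or its code; it is a minimal, hypothetical example of the described flow (human-scored responses, feature extraction, engine training, operational scoring), using assumed toy data, bag-of-words features, and a ridge regression model via the tm and glmnet packages.

# A minimal sketch of a toy automated scoring engine on hypothetical data;
# not the module's engine or its R-Shiny application.
library(tm)      # text preprocessing and document-term matrices
library(glmnet)  # penalized regression for score prediction

# Data collection (hypothetical): short responses with human-assigned scores 0-2
responses <- c(
  "water evaporates condenses into clouds and falls as rain",
  "the sun heats the water and it goes up",
  "rain comes from clouds",
  "evaporation condensation precipitation complete the cycle",
  "i do not know",
  "clouds form when vapor condenses then precipitation falls",
  "water goes up and comes down",
  "because it rains"
)
human_scores <- c(2, 1, 1, 2, 0, 2, 1, 0)

# Feature extraction: bag-of-words document-term matrix
train_corpus <- VCorpus(VectorSource(responses))
train_dtm <- DocumentTermMatrix(train_corpus)

# Engine training: ridge regression mapping word counts to human scores
engine <- glmnet(as.matrix(train_dtm), human_scores, alpha = 0, lambda = 0.5)

# Evaluation: exact agreement with human scores on the training set
# (operationally, held-out data and statistics such as quadratic weighted kappa)
train_pred <- round(predict(engine, newx = as.matrix(train_dtm)))
cat("Exact agreement with human scores:", mean(train_pred == human_scores), "\n")

# Operational scoring: score a new response using the training vocabulary
new_response <- "water evaporates and then falls as rain from clouds"
new_dtm <- DocumentTermMatrix(VCorpus(VectorSource(new_response)),
                              control = list(dictionary = Terms(train_dtm)))
cat("Predicted score:", round(as.numeric(predict(engine, newx = as.matrix(new_dtm)))), "\n")

Operational engines use far richer features, models, routing rules, and flagging logic than this; the sketch only illustrates the data-collection, engine-training, and operational-scoring flow that the description mentions.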