Deep Convolutional Symmetric Encoder–Decoder Neural Networks to Predict Students’ Visual Attention

Published in: Symmetry (Basel), Volume 13, Issue 12, p. 2246
Main authors: Hachaj, Tomasz; Stolińska, Anna; Andrzejewska, Magdalena; Czerski, Piotr
Format: Journal Article
Language: English
Publication details: Basel: MDPI AG, 01.12.2021
ISSN: 2073-8994
Description
Summary: Prediction of visual attention is a new and challenging subject, and to the best of our knowledge, few studies have been devoted to anticipating students’ cognition when solving tests. The aim of this paper is to propose, implement, and evaluate a machine learning method capable of predicting saliency maps of students who participate in a learning task in the form of quizzes, based on quiz questionnaire images. Our proposal utilizes several deep symmetric encoder–decoder schemas trained on a large set of saliency maps generated with eye tracking technology. Eye tracking data were acquired from students who solved various tasks in the sciences and natural sciences (computer science, mathematics, physics, and biology). The proposed deep convolutional encoder–decoder network produces accurate predictions of students’ visual attention when solving quizzes. Our evaluation showed that the predictions are moderately positively correlated with the actual data, with a coefficient of 0.547 ± 0.109, and that the method achieves better correlation with real saliency maps than state-of-the-art methods. Visual analyses of the obtained saliency maps also correspond with our experience and expectations in this field. Both the source code and the data from our research can be downloaded to reproduce our results.
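
The record describes the architecture only at a high level, so the sketch below shows what a deep convolutional symmetric encoder–decoder for saliency prediction can look like in PyTorch. It is a minimal illustration: the number of stages, channel widths, activations, and the Pearson-style correlation used as the evaluation metric are assumptions made here for clarity, not the authors’ published configuration (which the abstract says can be downloaded).

import torch
import torch.nn as nn

class SymmetricEncoderDecoder(nn.Module):
    """Illustrative symmetric encoder–decoder CNN: the encoder halves the
    spatial resolution at each stage, and the decoder mirrors it with
    transposed convolutions, ending in a single-channel saliency map."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        # Encoder: three downsampling stages (stride-2 convolutions).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: a mirror image of the encoder (transposed convolutions).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # saliency values in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def pearson_correlation(pred: torch.Tensor, target: torch.Tensor) -> float:
    """Pearson correlation between a predicted and a ground-truth saliency
    map; a metric of this kind is what the abstract reports (0.547 ± 0.109)."""
    p, t = pred.flatten(), target.flatten()
    p, t = p - p.mean(), t - t.mean()
    return float((p * t).sum() / (p.norm() * t.norm()))


# Example: one 256x256 RGB quiz-questionnaire image in, one saliency map out.
model = SymmetricEncoderDecoder()
image = torch.rand(1, 3, 256, 256)
saliency = model(image)
print(saliency.shape)  # torch.Size([1, 1, 256, 256])

The “symmetric” property here means the decoder mirrors the encoder stage for stage: each stride-2 convolution that halves the resolution is matched by a transposed convolution that doubles it, so the predicted saliency map has the same spatial size as the input quiz image.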
DOI: 10.3390/sym13122246