Error pattern recognition and correction methods in English oral learning process based on deep learning.

Saved in:
Detailed bibliography
Title: Error pattern recognition and correction methods in English oral learning process based on deep learning.
Authors: Jing, Yan
Source: Journal of Computational Methods in Sciences & Engineering; Jul2025, Vol. 25 Issue 4, p3269-3281, 13p
Subjects: Long short-term memory; Pattern recognition systems; Deep learning; Statistical significance; English language
Abstract: This study addresses the problem of error pattern recognition and correction in English oral learning using deep learning techniques. English oral errors hinder learners' effective expression and communication. To tackle this issue, a large dataset of English oral data was collected and preprocessed using Hamming window and Fourier transform techniques. A long short-term memory (LSTM) network was then employed to construct an English oral error recognition model. After training, the model demonstrated high accuracy in recognizing various oral error patterns. To evaluate the effectiveness of the proposed correction method, comparative experiments were conducted with two classes. The results showed that the LSTM-based model achieved an error recognition accuracy of over 95.39%, with an average accuracy of 97% across five common oral errors. Additionally, Class 1, using the correction method proposed in this study, showed an average score increase to 75.2, while Class 2, using traditional correction methods, increased only to 64.5. The average score difference of 10.7 points between the two classes was statistically significant (t = 4.217, p = 0.016, p < .05). The findings demonstrate that the LSTM-based oral error recognition model is both practical and effective, offering a precise method for recognizing and correcting oral errors and thereby improving the overall effectiveness of English oral learning. [ABSTRACT FROM AUTHOR]
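The preprocessing step described in the abstract (a Hamming window followed by a Fourier transform) can be sketched as follows. The frame length, sample rate, and test signal below are illustrative assumptions, not details taken from the paper, and a real pipeline would use an FFT rather than this naive DFT:

```python
import cmath
import math

def hamming_window(n_samples):
    """Hamming window coefficients: w[n] = 0.54 - 0.46*cos(2*pi*n/(N-1))."""
    if n_samples == 1:
        return [1.0]
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (n_samples - 1))
            for n in range(n_samples)]

def dft(frame):
    """Naive discrete Fourier transform, O(N^2); shown for clarity only."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def frame_spectrum(frame):
    """Window one speech frame, then return its magnitude spectrum."""
    window = hamming_window(len(frame))
    windowed = [s * w for s, w in zip(frame, window)]
    return [abs(x) for x in dft(windowed)]

# Illustrative frame: a 100 Hz sine sampled at 1 kHz, 64 samples.
frame = [math.sin(2 * math.pi * 100 * t / 1000) for t in range(64)]
spectrum = frame_spectrum(frame)
```

Windowing each frame before the transform reduces spectral leakage at the frame boundaries; the resulting magnitude spectra would then serve as input features for the recognition model.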
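The recognition model is built on an LSTM network. A minimal sketch of a single 1-unit LSTM cell step shows the gating mechanism involved; the weights, dimensions, and function names here are illustrative assumptions and do not reflect the paper's actual architecture:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, params):
    """One step of a 1-unit LSTM cell (illustrative, not the paper's model).

    params maps each gate name to a (w_x, w_h, b) triple of scalars.
    """
    def gate(name, squash):
        w_x, w_h, b = params[name]
        return squash(w_x * x + w_h * h_prev + b)

    f = gate("forget", sigmoid)       # how much old cell state to keep
    i = gate("input", sigmoid)        # how much new candidate to write
    g = gate("candidate", math.tanh)  # candidate cell-state update
    o = gate("output", sigmoid)       # how much cell state to expose
    c = f * c_prev + i * g            # new cell state
    h = o * math.tanh(c)              # new hidden state
    return h, c
```

In a full model, such a cell would be applied across the sequence of spectral feature frames, with the final hidden state feeding a classifier over error categories.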
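The class comparison rests on an independent-samples t-test. A minimal sketch of how such a statistic is computed (Welch's form, using the standard library); the score lists below are made-up placeholders, not the study's data:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples."""
    mean_a = statistics.fmean(sample_a)
    mean_b = statistics.fmean(sample_b)
    var_a = statistics.variance(sample_a)   # sample variance (n - 1 denominator)
    var_b = statistics.variance(sample_b)
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Hypothetical post-test scores for two classes (illustrative only).
class_1 = [78, 74, 76, 73, 75]
class_2 = [65, 63, 66, 64, 62]
t_stat = welch_t(class_1, class_2)
```

The p-value would then be obtained from the t distribution with the appropriate degrees of freedom; the study reports t = 4.217 and p = 0.016 for its own samples.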
Copyright of Journal of Computational Methods in Sciences & Engineering is the property of Sage Publications Inc.
Database: Complementary Index
ISSN: 1472-7978
DOI: 10.1177/14727978251318798