Grammatical Error Correction with Denoising Autoencoder

Bibliographic Details
Published in: International Journal of Advanced Computer Science & Applications, Vol. 12, No. 8
Main Authors: Pajak, Krzysztof, Gonczarek, Adam
Format: Journal Article
Language: English
Published: West Yorkshire: Science and Information (SAI) Organization Limited, 2021
ISSN: 2158-107X, 2156-5570
Description
Summary: A denoising autoencoder sequence-to-sequence model based on the transformer architecture has proved useful for tasks such as summarization, machine translation, and question answering. This paper investigates the possibility of using this model type for grammatical error correction (GEC) and introduces a novel method of remark-based combining of model checkpoint outputs. The approach was evaluated on the BEA 2019 shared task, achieving state-of-the-art F-scores of 73.90 on the test set and 56.58 on the development set. This was done without any GEC-specific pre-training, only by fine-tuning the autoencoder model and combining checkpoint outputs, which shows that an efficient GEC model can be trained in a matter of hours on a single GPU.
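The abstract does not detail the remark-based combining method itself; as a rough intuition for checkpoint-output combining in general, the sketch below shows one simple strategy: taking a majority vote over the corrected sentences produced by several fine-tuned checkpoints. The function name and the example outputs are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: combine per-checkpoint GEC outputs for one
# source sentence by majority vote over whole corrected sentences.
# This is NOT the paper's remark-based method, only an illustration
# of the general idea of combining checkpoint outputs.
from collections import Counter


def combine_checkpoint_outputs(candidates: list[str]) -> str:
    """Return the correction proposed by the most checkpoints.

    Ties are broken by first occurrence, since Counter.most_common
    preserves insertion order for equal counts (Python 3.7+).
    """
    counts = Counter(candidates)
    best, _ = counts.most_common(1)[0]
    return best


# Illustrative outputs of three checkpoints for the same input sentence.
outputs = [
    "He goes to school every day.",
    "He goes to school every day.",
    "He go to school every day.",
]
print(combine_checkpoint_outputs(outputs))
```

A real system would run this per sentence over the whole evaluation set; the paper's remark-based variant presumably uses richer information than raw vote counts.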
DOI: 10.14569/IJACSA.2021.0120893