Comparative Analysis of Pretrained Encoder-Decoder Transformer Models for Extreme Text Summarization

Bibliographic Details
Published in: 2023 Second International Conference on Advances in Computational Intelligence and Communication (ICACIC), pp. 1-6
Main Authors: RajyaLakshmi, Tamma; Kuppusamy, K.S.
Format: Conference paper
Language: English
Published: IEEE, 7 December 2023
Description
Summary: Text summarization plays a pivotal role in condensing crucial information from huge volumes of text. This study investigates the utilization of pre-trained transformer models within the domain of text summarization, with a specific emphasis on extreme summarization. It delves into the effectiveness of two prominent models, the Text-to-Text Transfer Transformer (T5) and Bidirectional and Auto-Regressive Transformers (BART), when applied to the task of summarization, providing a comparative analysis of their capabilities. The experiments conducted in this study involve the XSum and SciTLDR datasets. While fine-tuning T5 and BART on summarization tasks is a standard approach, we examine the performance of these models without fine-tuning. Additionally, we explore the potential of other pretrained models, such as PEGASUS and DistilBART, in generating concise and coherent summaries. This study contributes to the understanding of how pre-trained transformer models can be harnessed effectively for text summarization, especially in extreme summarization scenarios. The findings shed light on the performance, challenges, and potential of these models, opening avenues for further research in the field of automatic text summarization.
DOI: 10.1109/ICACIC59454.2023.10435363
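
The abstract describes evaluating pretrained encoder-decoder checkpoints on summarization without task-specific fine-tuning. The paper's exact setup is not reproduced in this record; as a rough, non-authoritative sketch of such a zero-shot run, the snippet below uses the Hugging Face transformers summarization pipeline, with placeholder model identifiers and generation settings chosen for illustration only (they are not taken from the study).

```python
# Illustrative sketch only: zero-shot summarization with pretrained
# encoder-decoder checkpoints via the Hugging Face "transformers" pipeline.
# The checkpoint names and generation lengths below are assumptions for
# demonstration, not the configuration reported in the paper.
from transformers import pipeline

ARTICLE = (
    "Text summarization condenses crucial information from large volumes of "
    "text. Extreme summarization compresses a document into a single, "
    "highly abstractive sentence."
)

# Candidate pretrained models, applied here without any further fine-tuning.
candidate_models = {
    "T5": "t5-base",
    "BART": "facebook/bart-large",
    "PEGASUS": "google/pegasus-xsum",
    "DistilBART": "sshleifer/distilbart-cnn-12-6",
}

for name, checkpoint in candidate_models.items():
    # The summarization pipeline handles model-specific input formatting
    # (e.g., the "summarize: " prefix for T5) automatically.
    summarizer = pipeline("summarization", model=checkpoint)
    result = summarizer(ARTICLE, max_length=32, min_length=8, do_sample=False)
    print(f"{name}: {result[0]['summary_text']}")
```

In practice, the generated summaries would then be scored against reference summaries from XSum or SciTLDR (for example with ROUGE) to compare the models, mirroring the comparative analysis the abstract describes.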