Performance Study on Extractive Text Summarization Using BERT Models
Author's Department
Computer Science & Engineering Department
https://doi.org/10.3390/info13020067
Document Type
Research Article
Publication Title
Information
Publication Date
1-28-2022
DOI
10.3390/info13020067
Abstract
Text summarization can be divided into two approaches: extractive and abstractive. Extractive summarization selects the salient sentences from the original document to form a summary, while abstractive summarization interprets the original document and generates a summary in its own words. Summary generation, whether extractive or abstractive, has been studied with different approaches in the literature, including statistical-, graph-, and deep learning-based methods. Deep learning has achieved promising performance in comparison to the classical approaches, and with the advancement of attention-based neural architectures such as the transformer, there are potential areas of improvement for the summarization task. The transformer architecture and its encoder model BERT produced improved performance on downstream NLP tasks. BERT (Bidirectional Encoder Representations from Transformers) is modeled as a stack of encoders and comes in different sizes, such as BERT-base with 12 encoders and BERT-large with 24 encoders; we focus on BERT-base for the purpose of this study. The objective of this paper is to study the performance of BERT-based model variants on text summarization through a series of experiments, and to propose “SqueezeBERTSum”, a summarization model fine-tuned with the SqueezeBERT encoder variant, which achieved competitive ROUGE scores, retaining 98% of the BERTSum baseline model’s performance with 49% fewer trainable parameters.
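The ROUGE scores cited in the abstract measure n-gram overlap between a generated summary and a human reference. As a rough illustration only (not the authors' evaluation code, which would typically use a standard ROUGE package), a minimal ROUGE-1 computation might look like:

```python
from collections import Counter

def rouge_1(candidate: str, reference: str) -> dict:
    """Illustrative ROUGE-1: unigram-overlap precision, recall, and F1."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: each candidate unigram counts at most as many
    # times as it appears in the reference.
    overlap = sum((cand & ref).values())
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"p": precision, "r": recall, "f": f1}

# Example: 5 of 6 candidate unigram tokens also occur in the reference.
print(rouge_1("the cat sat on the mat", "the cat lay on the mat"))
```

Published evaluations generally report ROUGE-1, ROUGE-2 (bigrams), and ROUGE-L (longest common subsequence); this sketch covers only the unigram case.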
First Page
1
Last Page
10
Recommended Citation
APA Citation
Abdel-Salam, S., & Rafea, A. (2022). Performance Study on Extractive Text Summarization Using BERT Models. Information, 13(2), 1–10.
10.3390/info13020067
https://fount.aucegypt.edu/faculty_journal_articles/4867
MLA Citation
Abdel-Salam, Shehab, and A. Rafea. "Performance Study on Extractive Text Summarization Using BERT Models." Information, vol. 13, no. 2, 2022, pp. 1–10.
https://fount.aucegypt.edu/faculty_journal_articles/4867