This paper presents an empirical study that harnesses the benefits of Positional Language Models (PLMs) as the core of an effective methodology for capturing the gist of a discursive text via extractive summarization. We introduce an unsupervised, adaptive, and cost-efficient approach that integrates semantic information into the process.
Texts are linguistically analyzed, and then semantic information—specifically synsets and named entities—is integrated into the PLM, enabling an understanding of the text in line with its discursive structure.
The proposed unsupervised approach is tested on different summarization tasks using standard benchmarks. The results are highly competitive with the state of the art, demonstrating the effectiveness of an approach that requires neither training data nor high-performance computing resources.
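As a rough illustration of the kind of positional language model the abstract refers to, the sketch below builds a unigram distribution at a given document position by spreading each term occurrence through a Gaussian proximity kernel, then scores candidate sentences against it. This is a minimal, hypothetical sketch (function names, kernel choice, and the scoring rule are our assumptions, not the authors' exact method); integrating synsets or named entities would amount to mapping tokens to those semantic units before counting.

```python
import math
from collections import Counter

def positional_lm(tokens, position, sigma=2.0):
    # Hypothetical positional language model: each occurrence of a term
    # contributes probability mass weighted by a Gaussian proximity kernel,
    # so terms near `position` dominate the local distribution.
    weights = Counter()
    for j, tok in enumerate(tokens):
        weights[tok] += math.exp(-((position - j) ** 2) / (2 * sigma ** 2))
    total = sum(weights.values())
    return {w: c / total for w, c in weights.items()}

def score_sentence(sentence_tokens, plm):
    # Illustrative extractive score: accumulate the PLM probability of
    # each sentence term; higher scores suggest more "gist-bearing" sentences.
    return sum(plm.get(tok, 0.0) for tok in sentence_tokens)

# Toy usage: score two candidate sentences against the model at position 0.
doc = "the model captures discourse structure in long texts".split()
plm = positional_lm(doc, position=0)
candidates = [["the", "model"], ["unrelated", "words"]]
best = max(candidates, key=lambda s: score_sentence(s, plm))
```

An extractive summarizer built this way would rank every sentence of the document and keep the top-scoring ones, with no supervision or training step involved.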
Authors: Marta Vicente and Elena Lloret
To be presented: October 2020