Leveraging Machine Learning to Explain the Nature of Written Genres

The analysis of discourse and the study of what characterizes it in terms of communicative objectives is essential to most tasks of Natural Language Processing. Consequently, research on textual genres as expressions of such objectives presents an opportunity to enhance both automatic techniques and resources. To conduct an investigation of this kind, it is necessary…

To What Extent Does Content Selection Affect Surface Realization in the Context of Headline Generation?

Headline generation is the task of condensing the most important information of a news article into a single short sentence. This task is normally addressed with summarization techniques, ideally combining extractive and abstractive methods together with sentence compression or fusion. Although Natural Language Generation (NLG) techniques have not been directly exploited…

Relevant Content Selection through Positional Language Models: An Exploratory Analysis

Extractive Summarisation, like other areas in Natural Language Processing, has succumbed to the general trend marked by the success of neural approaches. However, the required resources—computational, temporal, data—are not always available. We present an experimental study of a method based on statistical techniques that, exploiting the semantic information from the source and its structure, provides…

A Discourse-Informed Approach for Cost-Effective Extractive Summarization

This paper presents an empirical study that harnesses the benefits of Positional Language Models (PLMs) as the key to an effective methodology for understanding the gist of a discursive text via extractive summarization. We introduce an unsupervised, adaptive, and cost-efficient approach that integrates semantic information into the process. Texts are linguistically analyzed, and then semantic information—specifically…
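The core PLM idea can be illustrated with a minimal sketch: a term's count at one position is propagated to nearby positions through a Gaussian kernel, so sentences whose vocabulary clusters densely score high. This is only a toy illustration of positional propagation, not the paper's actual method; the sentence-scoring rule, the kernel width `sigma`, and the function names are assumptions.

```python
import math
from collections import defaultdict

def plm_sentence_scores(sentences, sigma=25.0):
    """Score sentences with a toy Positional Language Model: a term's
    count at position j is propagated to position i with a Gaussian
    kernel exp(-(i - j)^2 / (2 * sigma^2))."""
    words = [w for s in sentences for w in s.lower().split()]
    occ = defaultdict(list)          # positions where each term occurs
    for j, w in enumerate(words):
        occ[w].append(j)

    def vcount(w, i):                # propagated ("virtual") count of w at i
        return sum(math.exp(-((i - j) ** 2) / (2.0 * sigma ** 2))
                   for j in occ[w])

    scores, pos = [], 0
    for sent in sentences:
        toks = sent.lower().split()
        if not toks:
            scores.append(0.0)
            continue
        # Average virtual count of the sentence's own terms at their
        # positions: high where the sentence's vocabulary clusters densely.
        scores.append(sum(vcount(w, pos + k)
                          for k, w in enumerate(toks)) / len(toks))
        pos += len(toks)
    return scores

def extract_summary(sentences, k=2, sigma=25.0):
    """Unsupervised extractive summary: keep the k top-scoring sentences."""
    scores = plm_sentence_scores(sentences, sigma)
    top = sorted(range(len(sentences)), key=lambda i: -scores[i])[:k]
    return [sentences[i] for i in sorted(top)]   # keep original order
```

Because the scoring needs no training data, an approach along these lines stays unsupervised and cheap to run, in line with the cost-efficiency the abstract emphasizes.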

Optimizing Data-Driven Models for Summarization as Parallel Tasks

This paper tackles a hard optimization problem in computational linguistics, specifically automatic multi-document text summarization, using grid computing. The main challenge of multi-document summarization is to extract the most relevant and unique information effectively and efficiently from a set of topic-related documents, constrained to a specified length. In the Big Data/Text era, where…

Applying Natural Language Processing Techniques to Generate Open Data Web APIs Documentation

Information access globalisation has resulted in the continuous growth of data available online, especially on open data portals. However, in current open data portals, data is difficult to understand and access. One reason for this difficulty is the lack of suitable mechanisms to extract and learn valuable information from existing open…
