An investigation into the validity of some metrics for automatically evaluating Natural Language Generation systems

Reiter, Ehud and Belz, Anja (2009) An investigation into the validity of some metrics for automatically evaluating Natural Language Generation systems. Computational Linguistics, 35 (4). ISSN 0891-2017

Full text not available from this repository.

Abstract

There is growing interest in using automatically computed corpus-based evaluation metrics to evaluate Natural Language Generation (NLG) systems, because these are often considerably cheaper than the human-based evaluations which have traditionally been used in NLG. We review previous work on NLG evaluation and on validation of automatic metrics in NLP, and then present the results of two studies of how well some metrics that are popular in other areas of NLP (notably BLEU and ROUGE) correlate with human judgments in the domain of computer-generated weather forecasts. Our results suggest that, at least in this domain, such metrics may provide a useful measure of language quality, although the evidence for this is not as strong as we would ideally like to see; however, they do not provide a useful measure of content quality. We also discuss a number of caveats that must be kept in mind when interpreting this and other validation studies.
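For readers unfamiliar with this kind of validation study, the core procedure is to score each system output with an automatic metric against reference texts, and then correlate those scores with human judgments of the same outputs. The following is a minimal sketch in Python of that procedure, using NLTK's sentence-level BLEU and a Spearman rank correlation; the forecast strings and human ratings below are invented for illustration and are not the paper's data or its exact experimental setup.

    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
    from scipy.stats import spearmanr

    # Hypothetical system outputs, reference forecasts, and human ratings
    # (1-5 scale); all values are invented for illustration.
    outputs = [
        "cloudy with light rain by evening",
        "sunny intervals and a moderate westerly wind",
        "heavy snow expected overnight",
        "mostly dry with patchy fog in the morning",
    ]
    references = [
        "cloudy, with light rain arriving by evening",
        "sunny spells with a moderate west wind",
        "rain turning to heavy snow overnight",
        "dry for most, morning fog patches clearing",
    ]
    human_scores = [4.5, 4.0, 3.0, 3.5]

    # Smoothing avoids zero BLEU scores on short texts with few n-gram matches.
    smooth = SmoothingFunction().method1
    bleu = [
        sentence_bleu([ref.split()], out.split(), smoothing_function=smooth)
        for out, ref in zip(outputs, references)
    ]

    # A strong positive correlation would suggest the metric tracks human judgments.
    rho, p = spearmanr(bleu, human_scores)
    print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")

Rank correlations such as Spearman's rho are common in metric validation because they ask only whether the metric orders the outputs the same way the judges do, without assuming the two scales are linearly related.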

Item Type: Journal article
Subjects: G000 Computing and Mathematical Sciences > G400 Computing
DOI (a stable link to the resource): 10.1162/coli.2009.35.4.35405
Depositing User: Converis
Date Deposited: 23 Jan 2013 03:03
Last Modified: 23 Jan 2013 03:03
URI: http://eprints.brighton.ac.uk/id/eprint/7009
