Shared-task evaluations in HLT: lessons for NLG

Belz, Anja and Kilgarriff, Adam (2006) Shared-task evaluations in HLT: lessons for NLG. In: Proceedings of the 4th International Conference on Natural Language Generation (INLG'06), 15-16 July 2006, Sydney, Australia.


Abstract

While natural language generation (NLG) has a strong evaluation tradition, in particular in user-based and task-oriented evaluation, it has never evaluated different approaches and techniques by comparing their performance on the same tasks (shared-task evaluation, STE). NLG is characterised by a lack of consolidation of results, and by isolation from the rest of NLP, where STE is now standard. It is, moreover, a shrinking field (state-of-the-art MT and summarisation no longer perform generation as a subtask) which lacks the kind of funding and participation that natural language understanding (NLU) has attracted.

Item Type: Contribution to conference proceedings in the public domain (Full Paper)
Uncontrolled Keywords: Natural language generation
Subjects: Q000 Languages and Literature - Linguistics and related subjects > Q100 Linguistics
Faculties: Faculty of Science and Engineering > School of Computing, Engineering and Mathematics > Natural Language Technology
Depositing User: Converis
Date Deposited: 18 Nov 2007
Last Modified: 21 May 2014 11:01
URI: http://eprints.brighton.ac.uk/id/eprint/3200
