Shared-task evaluations in HLT: lessons for NLG



Belz, Anja and Kilgarriff, Adam (2006) Shared-task evaluations in HLT: lessons for NLG In: Proceedings of the 4th International Conference on Natural Language Generation (INLG'06), 15-16 July 2006, Sydney, Australia.

While natural language generation (NLG) has a strong evaluation tradition, in particular in user-based and task-oriented evaluation, it has never evaluated different approaches and techniques by comparing their performance on the same tasks (shared-task evaluation, STE). NLG is characterised by a lack of consolidation of results, and by isolation from the rest of NLP, where STE is now standard. It is, moreover, a shrinking field (state-of-the-art MT and summarisation no longer perform generation as a subtask) which lacks the kind of funding and participation that natural language understanding (NLU) has attracted.

Item Type: Contribution to conference proceedings in the public domain (Full Paper)
Uncontrolled Keywords: Natural language generation
Subjects: Q000 Languages and Literature - Linguistics and related subjects > Q100 Linguistics
Faculties: Faculty of Science and Engineering > School of Computing, Engineering and Mathematics > Natural Language Technology
Depositing User: Converis
Date Deposited: 18 Nov 2007
Last Modified: 25 Feb 2015 14:49
