Belz, Anja and Kilgarriff, Adam (2006) Shared-task evaluations in HLT: lessons for NLG. In: Proceedings of the 4th International Conference on Natural Language Generation (INLG'06), 15-16 July 2006, Sydney, Australia.
While natural language generation (NLG) has a strong evaluation tradition, particularly in user-based and task-oriented evaluation, it has never evaluated different approaches and techniques by comparing their performance on the same tasks (shared-task evaluation, STE). NLG is characterised by a lack of consolidation of results, and by isolation from the rest of NLP, where STE is now standard. It is, moreover, a shrinking field (state-of-the-art MT and summarisation no longer perform generation as a subtask) which lacks the kind of funding and participation that natural language understanding (NLU) has attracted.
Item Type: Contribution to conference proceedings in the public domain (Full Paper)
Uncontrolled Keywords: Natural language generation
Subjects: Q000 Languages and Literature - Linguistics and related subjects > Q100 Linguistics
Faculties: Faculty of Science and Engineering > School of Computing, Engineering and Mathematics > Natural Language Technology
Date Deposited: 18 Nov 2007
Last Modified: 25 Feb 2015 14:49