Language Model Transformers as Evaluators for Open-Domain Dialogues

Computer-based systems for communicating with humans have been a cornerstone of AI research since the 1950s. So far, the most effective way to assess the quality of the dialogues produced by these systems is resource-intensive manual labor rather than automated means. In this work, we investigate whether language models (LMs) based on transformer neural networks can indicate the quality of a conversation. In a general sense, language models are methods that learn to predict one or more words based on an already given context. Due to their unsupervised nature, they are candidates for efficient, automatic indication of dialogue quality. We demonstrate that the output of the language models correlates positively with the scores assigned by human evaluators. We also provide some insights into their behavior and inner workings in a conversational context.
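
The sketch below is a rough illustration of the idea, not the authors' exact method: it uses a pretrained GPT-2 model from the Hugging Face transformers library to compute the perplexity a language model assigns to a dialogue response given its context, where lower perplexity is read as a crude proxy for higher quality. The model choice and the function name response_perplexity are assumptions made for this example only.

```python
# Minimal sketch (assumed setup, not the paper's exact pipeline): score a
# dialogue response with a pretrained GPT-2 language model via the Hugging
# Face "transformers" library. A response the model finds more probable
# (lower perplexity) is treated as a rough proxy for higher dialogue quality.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast


def response_perplexity(context: str, response: str) -> float:
    """Return the perplexity GPT-2 assigns to `response` given `context`."""
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    # Concatenate context and response; mask the context tokens with -100 so
    # the loss (mean negative log-likelihood) is computed only over the
    # response tokens. Tokenizing the context separately to find the boundary
    # is an approximation that suffices for this illustration.
    context_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + response, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : context_ids.shape[1]] = -100  # -100 is ignored by the loss

    with torch.no_grad():
        loss = model(full_ids, labels=labels).loss

    return math.exp(loss.item())


# Hypothetical usage: a fluent continuation should score lower (better)
# than an incoherent one.
print(response_perplexity("How was your day? ", "It was great, thanks for asking."))
print(response_perplexity("How was your day? ", "Potato helicopter seventeen."))
```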

  • Published in:
    Proceedings of the 28th International Conference on Computational Linguistics (COLING)
  • Type:
    Inproceedings
  • Authors:
    R. Nedelchev, J. Lehmann, R. Usbeck
  • Year:
    2020

Citation Information

R. Nedelchev, J. Lehmann, R. Usbeck: Language Model Transformers as Evaluators for Open-Domain Dialogues. In: Proceedings of the 28th International Conference on Computational Linguistics (COLING), 2020. http://dx.doi.org/10.18653/v1/2020.coling-main.599