A Satisfaction-based Model for Affect Recognition from Conversational Features in Spoken Dialog Systems

Lebai Lutfi, Syaheerah Binti; Fernández Martínez, Fernando; Lucas Cuesta, Juan Manuel; López Lebón, Lorena and Montero Martínez, Juan Manuel (2013). A Satisfaction-based Model for Affect Recognition from Conversational Features in Spoken Dialog Systems. "Speech Communication", v. 55 (n. 7-8); pp. 825-840. ISSN 0167-6393. https://doi.org/10.1016/j.specom.2013.04.005.

Description

Title: A Satisfaction-based Model for Affect Recognition from Conversational Features in Spoken Dialog Systems
Author(s):
  • Lebai Lutfi, Syaheerah Binti
  • Fernández Martínez, Fernando
  • Lucas Cuesta, Juan Manuel
  • López Lebón, Lorena
  • Montero Martínez, Juan Manuel
Document Type: Article
Journal/Publication Title: Speech Communication
Date: September 2013
Volume: 55
Subjects:
Informal Keywords: Automatic affect detection, Affective spoken dialog system, Domestic environment, HiFi agent, Social intelligence, Dialog features, Conversational cues, User bias, Predicting user satisfaction
School: E.T.S.I. Telecomunicación (UPM)
Department: Ingeniería Electrónica
Creative Commons License: Attribution - NoDerivatives - NonCommercial (CC BY-NC-ND)

Full Text

PDF (5 MB) - Download | Preview

Abstract

Detecting user affect automatically during real-time conversation is the main challenge towards our greater aim of infusing social intelligence into a natural-language, mixed-initiative High-Fidelity (Hi-Fi) audio control spoken dialog agent. In recent years, studies on affect detection from voice have moved towards realistic, non-acted data, in which emotions are subtler and therefore harder to perceive, both in human labelling and in machine prediction. This paper addresses part of this challenge by considering the role of user satisfaction ratings and of conversational/dialog features in discriminating contentment and frustration, two emotions known to be prevalent in spoken human-computer interaction. Given the laboratory setting, however, users may be positively biased when rating the system, which calls the reliability of the satisfaction data into question. Machine learning experiments were therefore conducted on two datasets, one labelled by users and one by annotators, which were then compared in order to assess their reliability. Our results indicate that standard classifiers were significantly more successful in discriminating the abovementioned emotions and their intensities (reflected by the user satisfaction ratings) from annotator data than from user data. These results corroborate that, first, satisfaction data can be used directly as an alternative target variable to model affect, and that it can be predicted exclusively from dialog features; and second, that this holds only when predicting the abovementioned emotions from the annotators' data, suggesting that user bias does exist in a laboratory-led evaluation.
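The abstract describes training standard classifiers on dialog-level conversational features and comparing how well they recover affect/satisfaction labels obtained from users versus annotators. The following is a minimal, hypothetical sketch of that comparison (not the authors' code); the feature names, column names, file name, and choice of classifier are illustrative assumptions only.

```python
# Hypothetical sketch: train a standard classifier on dialog features and
# compare cross-validated accuracy against two label sources
# (user satisfaction ratings vs. annotator ratings). All names are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumed dialog/conversational features, one row per dialog.
FEATURES = ["num_turns", "num_errors", "num_confirmations",
            "avg_turn_duration", "num_reprompts"]

def score_label_source(csv_path: str, label_column: str) -> float:
    """Mean 10-fold cross-validated accuracy for one label source."""
    data = pd.read_csv(csv_path)
    X = data[FEATURES]
    y = data[label_column]  # e.g. contentment / frustration, or a rating level
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X, y, cv=10).mean()

# Compare the two datasets, mirroring the paper's user-vs-annotator comparison.
print("user labels:     ", score_label_source("dialogs.csv", "user_satisfaction"))
print("annotator labels:", score_label_source("dialogs.csv", "annotator_rating"))
```

Under the paper's findings, the second score would be expected to exceed the first, reflecting the positive bias in user-provided ratings.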

More Information

Record ID: 15781
DC Identifier: http://oa.upm.es/15781/
OAI Identifier: oai:oa.upm.es:15781
DOI: 10.1016/j.specom.2013.04.005
Official URL: http://www.sciencedirect.com/science/article/pii/S0167639313000472
Deposited by: Memoria Investigacion
Deposited on: 06 Jul 2013 07:29
Last Modified: 01 Oct 2015 22:56