Lebai Lutfi, Syaheerah Binti; Fernández Martínez, Fernando (ORCID: https://orcid.org/0000-0003-3877-0089); Casanova García, Andrés; López Lebón, Lorena; Montero Martínez, Juan Manuel (ORCID: https://orcid.org/0000-0002-7908-5400) (2012). Assessing user bias in affect detection within context-based spoken dialog systems. In: "ASE/IEEE International Conference on Social Computing and 2012 ASE/IEEE International Conference on Privacy, Security, Risk and Trust", 03/09/2012 - 06/09/2012, Amsterdam, The Netherlands.
Title: | Assessing user bias in affect detection within context-based spoken dialog systems |
---|---|
Author/s: | Lebai Lutfi, Syaheerah Binti; Fernández Martínez, Fernando; Casanova García, Andrés; López Lebón, Lorena; Montero Martínez, Juan Manuel |
Item Type: | Presentation at Congress or Conference (Article) |
Event Title: | ASE/IEEE International Conference on Social Computing and 2012 ASE/IEEE International Conference on Privacy, Security, Risk and Trust |
Event Dates: | 03/09/2012 - 06/09/2012 |
Event Location: | Amsterdam, The Netherlands |
Title of Book: | ASE/IEEE International Conference on Social Computing and 2012 ASE/IEEE International Conference on Privacy, Security, Risk and Trust |
Date: | 2012 |
Subjects: | |
Faculty: | E.T.S.I. Telecomunicación (UPM) |
Department: | Ingeniería Electrónica |
Creative Commons Licenses: | Recognition - No derivative works - Non commercial |
This paper presents empirical evidence of user bias within a laboratory-oriented evaluation of a spoken dialog system. Specifically, we address user bias in users' satisfaction judgements and question the reliability of these judgements for modeling user emotion, focusing on contentment and frustration in a spoken dialog system. The bias is detected through machine learning experiments conducted on two datasets, one labeled by users and one by annotators, which were then compared in order to assess their reliability. The target was the satisfaction rating and the predictors were conversational/dialog features. Our results indicate that standard classifiers were significantly more successful at discriminating frustration and contentment, and the intensities of these emotions (reflected in user satisfaction ratings), from annotator data than from user data. Indirectly, the results also show that conversational features are reliable predictors of these two emotions.
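The comparison described in the abstract can be illustrated with a minimal sketch: train the same standard classifier to predict satisfaction ratings from conversational/dialog features, once with user-provided labels and once with annotator-provided labels, and compare cross-validated accuracy. This is not the authors' code; the file names, feature columns, and choice of classifier are assumptions made purely for illustration.

```python
# Hedged sketch (not the paper's implementation): compare how well a
# standard classifier recovers satisfaction ratings from dialog features
# when the labels come from users vs. from external annotators.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def label_discriminability(features: pd.DataFrame, labels: pd.Series) -> float:
    """Mean 10-fold cross-validated accuracy of a standard classifier
    predicting satisfaction ratings from conversational features."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, features, labels, cv=10).mean()


# Hypothetical input files: one row per dialog, with conversational
# features (e.g. turn counts, barge-ins, ASR confidence) and a
# "satisfaction" rating column.
user_df = pd.read_csv("user_rated_dialogs.csv")
annotator_df = pd.read_csv("annotator_rated_dialogs.csv")

feature_cols = [c for c in user_df.columns if c != "satisfaction"]

acc_user = label_discriminability(user_df[feature_cols], user_df["satisfaction"])
acc_annot = label_discriminability(annotator_df[feature_cols], annotator_df["satisfaction"])

# The paper reports classifiers discriminate contentment/frustration far
# better from annotator labels than from (biased) user labels, i.e. the
# second number would come out significantly higher.
print(f"accuracy with user labels:      {acc_user:.3f}")
print(f"accuracy with annotator labels: {acc_annot:.3f}")
```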
Item ID: | 19784 |
---|---|
DC Identifier: | https://oa.upm.es/19784/ |
OAI Identifier: | oai:oa.upm.es:19784 |
Official URL: | http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumb... |
Deposited by: | Memoria Investigacion |
Deposited on: | 17 Sep 2013 16:38 |
Last Modified: | 21 Apr 2016 21:13 |