Joint learning of word and label embeddings for sequence labelling in spoken language understanding

Wu, Jiewen and D’Haro, Luis Fernando and Chen, Nancy F. and Krishnaswamy, Pavitra and Banchs, Rafael E. (2019). Joint learning of word and label embeddings for sequence labelling in spoken language understanding. In: "Proceedings of the 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)", 14/12/2019 - 18/12/2019, Singapore. pp. 800-806. https://doi.org/10.1109/ASRU46091.2019.9003735.

Description

Title: Joint learning of word and label embeddings for sequence labelling in spoken language understanding
Author/s:
  • Wu, Jiewen
  • D’Haro, Luis Fernando
  • Chen, Nancy F.
  • Krishnaswamy, Pavitra
  • Banchs, Rafael E.
Item Type: Presentation at Congress or Conference (Article)
Event Title: Proceedings of the 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
Event Dates: 14/12/2019 - 18/12/2019
Event Location: Singapore
Title of Book: 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
Date: 2019
Subjects:
Freetext Keywords: Slot-filling; recurrent neural network; distributional semantics; sequence labelling
Faculty: E.T.S.I. Telecomunicación (UPM)
Department: Ingeniería Electrónica
Creative Commons Licenses: Attribution - NonCommercial - NoDerivatives (CC BY-NC-ND)

Full text

PDF - Download (1MB)

Abstract

We propose an architecture to jointly learn word and label embeddings for slot filling in spoken language understanding. The proposed approach encodes labels using a combination of word embeddings and straightforward word-label associations from the training data. Compared to state-of-the-art methods, our approach does not require label embeddings as part of the input and therefore lends itself to a wide range of model architectures. In addition, our architecture computes contextual distances between words and labels to avoid adding contextual windows, thus reducing the memory footprint. We validate the approach on established spoken dialogue datasets and show that it can achieve state-of-the-art performance with far fewer trainable parameters.
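The core idea of the abstract - composing label embeddings from the word embeddings of words associated with each label in the training data, then scoring words against labels by similarity - can be illustrated with a minimal sketch. This is an assumption-laden toy (random embeddings, count-weighted averaging, cosine scoring, invented slot labels in ATIS style), not the paper's actual model:

```python
import numpy as np

def label_embeddings_from_words(word_vecs, label_word_counts):
    """Compose each label's embedding as the count-weighted average of the
    embeddings of words co-occurring with that label in training data.
    (Illustrative composition; the paper's exact scheme may differ.)"""
    dim = next(iter(word_vecs.values())).shape[0]
    label_vecs = {}
    for label, counts in label_word_counts.items():
        vec, total = np.zeros(dim), 0
        for word, c in counts.items():
            if word in word_vecs:
                vec += c * word_vecs[word]
                total += c
        label_vecs[label] = vec / max(total, 1)
    return label_vecs

def tag_sentence(word_vecs, label_vecs, sentence):
    """Assign each word the label whose embedding is closest by cosine
    similarity - a stand-in for the contextual distance computation."""
    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return [max(label_vecs, key=lambda l: cosine(word_vecs[w], label_vecs[l]))
            for w in sentence]

# Toy data: random word vectors and hypothetical slot labels.
rng = np.random.default_rng(0)
word_vecs = {w: rng.normal(size=8)
             for w in ["book", "flight", "boston", "denver"]}
label_word_counts = {"B-fromloc": {"boston": 3},
                     "B-toloc":   {"denver": 2},
                     "O":         {"book": 5, "flight": 4}}
label_vecs = label_embeddings_from_words(word_vecs, label_word_counts)
preds = tag_sentence(word_vecs, label_vecs, ["boston", "denver"])
```

Because the labels here are built directly from word-label co-occurrence, no separate label-embedding input is needed, which mirrors why the approach plugs into many architectures.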

Funding Projects

Type: Government of Spain
Code: TIN2017-85854-C4-4-R
Acronym: Unspecified
Leader: Unspecified
Title: Análisis afectivo de información multimedia con comunicación inclusiva natural

More information

Item ID: 65332
DC Identifier: http://oa.upm.es/65332/
OAI Identifier: oai:oa.upm.es:65332
DOI: 10.1109/ASRU46091.2019.9003735
Official URL: https://ieeexplore.ieee.org/abstract/document/9003735
Deposited by: Memoria Investigacion
Deposited on: 17 Apr 2021 06:23
Last Modified: 17 Apr 2021 06:23