DAEDALUS at ImageCLEF Medical Retrieval 2011: Textual, Visual and Multimodal Experiments

Lana Serrano, Sara; Villena Román, Julio and González Cristóbal, José Carlos (2011). DAEDALUS at ImageCLEF Medical Retrieval 2011: Textual, Visual and Multimodal Experiments. In: "CLEF 2011 Labs and Workshop, Notebook Papers", 19/09/2011 - 22/09/2011, Amsterdam, The Netherlands. pp. 11-18.

Description

Title: DAEDALUS at ImageCLEF Medical Retrieval 2011: Textual, Visual and Multimodal Experiments
Author(s):
  • Lana Serrano, Sara
  • Villena Román, Julio
  • González Cristóbal, José Carlos
Document Type: Conference or Workshop Presentation (Article)
Event Title: CLEF 2011 Labs and Workshop, Notebook Papers
Event Dates: 19/09/2011 - 22/09/2011
Event Location: Amsterdam, The Netherlands
Book Title: Proceedings of CLEF 2011 Labs and Workshop, Notebook Papers
Date: 2011
Subjects:
School: E.U.I.T. Telecomunicación (UPM) [former name]
Department: Ingeniería y Arquitecturas Telemáticas [until 2014]
Creative Commons License: Attribution - NoDerivatives - NonCommercial

Full Text

PDF (192kB)

Abstract

This paper describes the participation of DAEDALUS in the ImageCLEF 2011 Medical Retrieval task. We have focused on multimodal (or mixed) experiments that combine textual and visual retrieval. The main objective of our research has been to evaluate the effect on the medical retrieval process of an extended corpus annotated with the image type, associated with both the image itself and its textual description. For this purpose, an image classifier has been developed to tag each document with its class (1st level of the hierarchy: Radiology, Microscopy, Photograph, Graphic, Other) and subclass (2nd level: AN, CT, MR, etc.). For the textual-based experiments, several runs using different semantic expansion techniques have been performed. For the visual-based retrieval, different runs are defined by the corpus used in the retrieval process and the strategy for obtaining the class and/or subclass. The best results are achieved in runs that make use of the image subclass based on the classification of the sample images. Although different multimodal strategies have been submitted, none of them has been shown to provide results that are at least comparable to those achieved by textual retrieval alone. We believe that we have been unable to find a metric for assessing the relevance of the results provided by the visual and textual processes.
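The two-level annotation described in the abstract (a 1st-level class plus a 2nd-level subclass per document) can be sketched as a simple hierarchical tagger. This is only an illustrative sketch: the subclass-to-class mapping below is assumed for demonstration and is not the paper's actual classification table, and the code names (`SUBCLASS_TO_CLASS`, `tag_document`) are hypothetical.

```python
# Hypothetical mapping from 2nd-level subclass codes to the 1st-level
# classes named in the abstract (Radiology, Microscopy, Photograph,
# Graphic, Other). Only AN, CT and MR appear in the abstract; the rest
# are illustrative assumptions.
SUBCLASS_TO_CLASS = {
    "AN": "Radiology",   # angiography (assumed expansion)
    "CT": "Radiology",   # computed tomography
    "MR": "Radiology",   # magnetic resonance
    "HX": "Microscopy",  # histopathology (assumed code)
    "PX": "Photograph",  # photograph (assumed code)
    "GX": "Graphic",     # graphs/charts (assumed code)
}

def tag_document(subclass: str) -> dict:
    """Annotate a document with its hierarchical image type.

    Unknown subclass codes fall through to the 'Other' class,
    matching the catch-all category in the abstract's hierarchy.
    """
    cls = SUBCLASS_TO_CLASS.get(subclass, "Other")
    return {"class": cls, "subclass": subclass}
```

Such a tag can then be attached to both the image and its textual description, so that either retrieval path can filter or re-rank by image type.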

More Information

Record ID: 13322
DC Identifier: http://oa.upm.es/13322/
OAI Identifier: oai:oa.upm.es:13322
Deposited by: Memoria Investigacion
Deposited on: 28 Nov 2012 09:14
Last Modified: 21 Apr 2016 12:38