DAEDALUS at ImageCLEF Medical Retrieval 2011: Textual, Visual and Multimodal Experiments

Lana Serrano, Sara and Villena Román, Julio and González Cristóbal, José Carlos (2011). DAEDALUS at ImageCLEF Medical Retrieval 2011: Textual, Visual and Multimodal Experiments. In: "CLEF 2011 Labs and Workshop, Notebook Papers", 19/09/2011 - 22/09/2011, Amsterdam, The Netherlands. pp. 11-18.

Description

Title: DAEDALUS at ImageCLEF Medical Retrieval 2011: Textual, Visual and Multimodal Experiments
Author/s:
  • Lana Serrano, Sara
  • Villena Román, Julio
  • González Cristóbal, José Carlos
Item Type: Presentation at Congress or Conference (Article)
Event Title: CLEF 2011 Labs and Workshop, Notebook Papers
Event Dates: 19/09/2011 - 22/09/2011
Event Location: Amsterdam, The Netherlands
Title of Book: Proceedings of CLEF 2011 Labs and Workshop, Notebook Papers
Date: 2011
Subjects:
Faculty: E.U.I.T. Telecomunicación (UPM)
Department: Ingeniería y Arquitecturas Telemáticas [until 2014]
Creative Commons Licenses: Attribution - NonCommercial - NoDerivatives

Full text

PDF (192kB)

Abstract

This paper describes the participation of DAEDALUS in the ImageCLEF 2011 Medical Retrieval task. We have focused on multimodal (or mixed) experiments that combine textual and visual retrieval. The main objective of our research has been to evaluate the effect on the medical retrieval process of an extended corpus annotated with the image type, associated with both the image itself and its textual description. For this purpose, an image classifier has been developed to tag each document with its class (1st level of the hierarchy: Radiology, Microscopy, Photograph, Graphic, Other) and subclass (2nd level: AN, CT, MR, etc.). For the textual-based experiments, several runs using different semantic expansion techniques have been performed. For the visual-based retrieval, different runs are defined by the corpus used in the retrieval process and the strategy for obtaining the class and/or subclass. The best results are achieved in runs that make use of the image subclass based on the classification of the sample images. Although different multimodal strategies have been submitted, none of them has proven able to provide results that are at least comparable to those achieved by textual retrieval alone. We believe that we have been unable to find a suitable metric for assessing the relevance of the results provided by the visual and textual processes.
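As an illustration of the kind of multimodal combination discussed in the abstract, the following is a minimal sketch of score-level (late) fusion of a textual and a visual retrieval run. The min-max normalization and the weighting parameter `alpha` are assumptions for illustration only, not the fusion strategy reported in the paper.

```python
# Minimal sketch of late fusion of textual and visual retrieval scores.
# The normalization scheme and the weight `alpha` are illustrative
# assumptions, not the strategy described by the authors.

def normalize(scores):
    """Min-max normalize a {doc_id: score} mapping to the [0, 1] range."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def fuse(textual, visual, alpha=0.7):
    """Linearly combine normalized textual and visual scores.

    Documents missing from one modality contribute 0 for that modality.
    Returns a ranked list of (doc_id, fused_score) pairs.
    """
    t, v = normalize(textual), normalize(visual)
    docs = set(t) | set(v)
    fused = {d: alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0) for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical run scores keyed by image identifier.
    textual_run = {"img_001": 12.3, "img_002": 9.8, "img_003": 4.1}
    visual_run = {"img_002": 0.91, "img_003": 0.88, "img_004": 0.45}
    for doc_id, score in fuse(textual_run, visual_run):
        print(doc_id, round(score, 3))
```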

More information

Item ID: 13322
DC Identifier: http://oa.upm.es/13322/
OAI Identifier: oai:oa.upm.es:13322
Deposited by: Memoria Investigacion
Deposited on: 28 Nov 2012 09:14
Last Modified: 21 Apr 2016 12:38