Hand Pose Recognition through MediaPipe Landmarks

Gil Martin, Manuel ORCID: https://orcid.org/0000-0002-4285-6224, San Segundo Hernández, Rubén ORCID: https://orcid.org/0000-0001-9659-5464 and Córdoba Herralde, Ricardo de ORCID: https://orcid.org/0000-0002-7136-9636 (2023). Hand Pose Recognition through MediaPipe Landmarks. In: "International Conference on Modeling Decisions for Artificial Intelligence", 19 - 22 June, 2023, Umeå, Sweden. ISBN 978-91-527-7293-5.

Description

Title: Hand Pose Recognition through MediaPipe Landmarks
Author(s):
Document Type: Conference or Workshop Presentation (Article)
Event Title: International Conference on Modeling Decisions for Artificial Intelligence
Event Dates: 19 - 22 June, 2023
Event Location: Umeå, Sweden
Book Title: MDAI 2023: Proceedings of the 20th International Conference on Modeling Decisions for Artificial Intelligence 2023
Date: 1 January 2023
ISBN: 978-91-527-7293-5
Subjects:
School: E.T.S.I. Telecomunicación (UPM)
Department: Ingeniería Electrónica
Creative Commons License: Attribution - NoDerivatives - NonCommercial

Full Text

PDF (10238733.pdf) - Download (2 MB)

Abstract

This paper proposes a framework to recognize hand poses using a limited number of landmarks from images. This Hand Pose Recognition (HPR) system is composed of a signal processing module that extracts and processes the coordinates of specific points of the hand called landmarks, and a deep neural network module that models and classifies the hand poses. These specific points or landmarks are extracted automatically through MediaPipe software. Detecting hand poses from these points has two main advantages over traditional computer vision approaches: the information sent to the recognition module is smaller (points’ coordinates vs. a full image), and the classification is not affected by additional information included in the images (such as the background). The experiments were carried out on two different datasets using the experimental setups of previous works. The proposed framework obtained better performance than the best results reported in previous works. For example, when using the Tiny Hand Gesture Recognition Dataset, we obtained classification accuracies of 98.74 ± 0.08 % and 98.22 ± 0.06 % with simple and complex backgrounds respectively, while the best accuracies reported in previous works (using the whole image) were 97.10 % and 85.30 % respectively. The proposed solution provides high recognition performance independently of the background against which the image is taken.
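The front end described in the abstract can be sketched in a few lines: MediaPipe Hands returns 21 landmarks per detected hand, each with (x, y, z) coordinates, so a pose classifier can operate on a 63-value vector instead of a full image. The wrist-relative normalization below is a common, illustrative choice; the abstract does not specify the paper's exact preprocessing.

```python
def landmarks_to_features(landmarks):
    """Flatten 21 (x, y, z) hand landmarks into a 63-dim feature vector,
    expressed relative to the wrist (landmark 0).

    Note: wrist-relative normalization is an assumption for this sketch,
    not necessarily the processing used in the paper.
    """
    if len(landmarks) != 21:
        raise ValueError("expected 21 MediaPipe hand landmarks")
    wx, wy, wz = landmarks[0]
    features = []
    for x, y, z in landmarks:
        features.extend([x - wx, y - wy, z - wz])
    return features

# Example with synthetic landmarks; real ones would come from
# mediapipe.solutions.hands.Hands().process(image).
fake = [(0.5 + 0.01 * i, 0.5 - 0.01 * i, 0.0) for i in range(21)]
vec = landmarks_to_features(fake)
print(len(vec))  # 63
```

The resulting vector would then feed the deep neural network classifier described in the paper.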

Associated Projects

Type                | Code                 | Acronym       | Lead          | Title
Government of Spain | PDC2021-120846-C42   | Not specified | Not specified | Not specified
Government of Spain | PID2021-126061OB-C43 | Not specified | Not specified | Not specified

More Information

Record ID: 84969
DC Identifier: https://oa.upm.es/84969/
OAI Identifier: oai:oa.upm.es:84969
Scientific Portal URL: https://portalcientifico.upm.es/es/ipublic/item/10238733
Official URL: https://www.mdai.cat/mdai2023/proc.mdai2023.usb.pd...
Deposited by: iMarina Portal Científico
Deposited on: 23 Nov 2024 16:23
Last Modified: 23 Nov 2024 16:23