Efficient illumination independent appearance-based face tracking

Buenaposada Biencinto, José Miguel; Muñoz, Enrique and Baumela Molina, Luis (2009). Efficient illumination independent appearance-based face tracking. "Image and Vision Computing", v. 27 (n. 5); pp. 560-578. ISSN 0262-8856. https://doi.org/10.1016/j.imavis.2008.04.015.

Description

Title: Efficient illumination independent appearance-based face tracking
Author(s):
  • Buenaposada Biencinto, José Miguel
  • Muñoz, Enrique
  • Baumela Molina, Luis
Document Type: Article
Journal/Publication Title: Image and Vision Computing
Date: 2 April 2009
Volume: 27
Subjects:
School: Facultad de Informática (UPM) [former name]
Department: Inteligencia Artificial
Creative Commons Licenses: None

Full text

PDF (Portable Document Format) - Download (2MB) | Preview

Abstract

One of the major challenges that visual tracking algorithms face nowadays is being able to cope with changes in the appearance of the target during tracking. Linear subspace models have been extensively studied and are possibly the most popular way of modelling target appearance. We introduce a linear subspace representation in which the appearance of a face is represented by the addition of two approximately independent linear subspaces modelling facial expressions and illumination respectively. This model is more compact than previous bilinear or multilinear approaches. The independence assumption notably simplifies system training. We only require two image sequences: in one, a single facial expression is subject to all possible illuminations; in the other, the face adopts all facial expressions under one particular illumination. This simple model enables us to train the system with no manual intervention. We also revisit the problem of efficiently fitting a linear subspace-based model to a target image and introduce an additive procedure for solving this problem. We prove that Matthews and Baker’s Inverse Compositional Approach makes a smoothness assumption on the subspace basis that is equivalent to Hager and Belhumeur’s, which worsens convergence. Our approach differs from Hager and Belhumeur’s additive and Matthews and Baker’s compositional approaches in that we make no smoothness assumptions on the subspace basis. In the experiments conducted we show that the model introduced accurately represents the appearance variations caused by illumination changes and facial expressions. We also verify experimentally that our fitting procedure is more accurate and has a better convergence rate than the other related approaches, albeit at the expense of a slight increase in computational cost. Our approach can be used for tracking a human face at standard video frame rates on an average personal computer.
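
To make the additive appearance model described above concrete, the sketch below (not the authors' implementation) shows how a face image could be reconstructed as a mean image plus independent contributions from an expression subspace and an illumination subspace, and how the two sets of coefficients could be recovered by a simple least-squares projection. All names (mu, B_expr, B_illum) and the fitting step are illustrative assumptions; the paper's efficient additive fitting procedure also estimates motion parameters, which are omitted here.

```python
import numpy as np

def reconstruct(mu, B_expr, B_illum, c_expr, c_illum):
    """Rebuild a vectorised face image from expression and illumination coefficients."""
    return mu + B_expr @ c_expr + B_illum @ c_illum

def fit_coefficients(image, mu, B_expr, B_illum):
    """Least-squares projection of an image onto the joint (expression + illumination) subspace."""
    B = np.hstack([B_expr, B_illum])            # stack both bases column-wise
    c, *_ = np.linalg.lstsq(B, image - mu, rcond=None)
    k = B_expr.shape[1]
    return c[:k], c[k:]                         # split into expression / illumination parts

# Toy usage with random stand-ins for trained bases.
rng = np.random.default_rng(0)
n_pixels, k_expr, k_illum = 1000, 5, 3
mu = rng.standard_normal(n_pixels)
B_expr = np.linalg.qr(rng.standard_normal((n_pixels, k_expr)))[0]
B_illum = np.linalg.qr(rng.standard_normal((n_pixels, k_illum)))[0]
image = reconstruct(mu, B_expr, B_illum,
                    rng.standard_normal(k_expr), rng.standard_normal(k_illum))
c_e, c_i = fit_coefficients(image, mu, B_expr, B_illum)
print(np.allclose(reconstruct(mu, B_expr, B_illum, c_e, c_i), image))  # True
```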

More information

Record ID: 1861
DC Identifier: http://oa.upm.es/1861/
OAI Identifier: oai:oa.upm.es:1861
DOI: 10.1016/j.imavis.2008.04.015
Deposited by: Professor Luis Baumela Molina
Deposited on: 20 Oct 2009 07:13
Last Modified: 20 Apr 2016 07:03
  • Open Access