Buenaposada Biencinto, José Miguel and Muñoz, Enrique and Baumela Molina, Luis (2009). Efficient illumination independent appearance-based face tracking. "Image and Vision Computing", v. 27 (n. 5); pp. 560-578. ISSN 0262-8856. https://doi.org/10.1016/j.imavis.2008.04.015.
Title: | Efficient illumination independent appearance-based face tracking |
---|---|
Author/s: | Buenaposada Biencinto, José Miguel; Muñoz, Enrique; Baumela Molina, Luis |
Item Type: | Article |
Journal/Publication Title: | Image and Vision Computing |
Date: | 2 April 2009 |
ISSN: | 0262-8856 |
Volume: | 27 |
Subjects: | |
Faculty: | Facultad de Informática (UPM) |
Department: | Inteligencia Artificial |
Creative Commons Licenses: | None |
One of the major challenges that visual tracking algorithms face nowadays is coping with changes in the appearance of the target during tracking. Linear subspace models have been extensively studied and are possibly the most popular way of modelling target appearance. We introduce a linear subspace representation in which the appearance of a face is represented by the addition of two approximately independent linear subspaces modelling facial expressions and illumination respectively. This model is more compact than previous bilinear or multilinear approaches. The independence assumption notably simplifies system training: only two image sequences are required, one in which a single facial expression is subject to all possible illuminations, and another in which the face adopts all facial expressions under one particular illumination. This simple model enables us to train the system with no manual intervention. We also revisit the problem of efficiently fitting a linear subspace-based model to a target image and introduce an additive procedure for solving this problem. We prove that Matthews and Baker's Inverse Compositional Approach makes a smoothness assumption on the subspace basis that is equivalent to Hager and Belhumeur's, which worsens convergence. Our approach differs from Hager and Belhumeur's additive and Matthews and Baker's compositional approaches in that we make no smoothness assumptions on the subspace basis. In the experiments conducted we show that the introduced model accurately represents the appearance variations caused by illumination changes and facial expressions. We also verify experimentally that our fitting procedure is more accurate and has a better convergence rate than the other related approaches, albeit at the expense of a slight increase in computational cost. Our approach can be used to track a human face at standard video frame rates on an average personal computer.
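
To illustrate the additive two-subspace idea described in the abstract, the following is a minimal sketch in Python/NumPy, not the paper's efficient fitting algorithm. All names, dimensions and the randomly generated bases are illustrative assumptions: a face appearance is modelled as a mean image plus an expression component plus an illumination component, and the coefficients are recovered by least-squares projection onto the concatenated basis.

```python
import numpy as np

# Hypothetical dimensions: a d-pixel face patch, k_e expression and k_i
# illumination basis vectors (illustrative values, not from the paper).
d, k_e, k_i = 4096, 10, 5

rng = np.random.default_rng(0)
mean_face = rng.standard_normal(d)                         # mean appearance
B_expr = np.linalg.qr(rng.standard_normal((d, k_e)))[0]    # expression subspace basis
B_illum = np.linalg.qr(rng.standard_normal((d, k_i)))[0]   # illumination subspace basis

def synthesize(c_expr, c_illum):
    """Additive appearance: mean + expression component + illumination component."""
    return mean_face + B_expr @ c_expr + B_illum @ c_illum

def fit_coefficients(image):
    """Least-squares projection of the mean-subtracted image onto the joint basis."""
    B = np.hstack([B_expr, B_illum])
    c, *_ = np.linalg.lstsq(B, image - mean_face, rcond=None)
    return c[:k_e], c[k_e:]

# Round trip: synthesize an appearance and recover its coefficients.
c_e_true, c_i_true = rng.standard_normal(k_e), rng.standard_normal(k_i)
img = synthesize(c_e_true, c_i_true)
c_e_est, c_i_est = fit_coefficients(img)
print(np.allclose(c_e_est, c_e_true), np.allclose(c_i_est, c_i_true))
```

In the paper the fitting is performed within an image-registration loop rather than by a plain projection as above; the sketch only shows how the expression and illumination contributions add up in a single linear model.
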
Item ID: | 1861 |
---|---|
DC Identifier: | https://oa.upm.es/1861/ |
OAI Identifier: | oai:oa.upm.es:1861 |
DOI: | 10.1016/j.imavis.2008.04.015 |
Deposited by: | Professor Luis Baumela Molina |
Deposited on: | 20 Oct 2009 07:13 |
Last Modified: | 20 Apr 2016 07:03 |