2008 | OriginalPaper | Chapter
Robust Visual Tracking Based on an Effective Appearance Model
Authors : Xi Li, Weiming Hu, Zhongfei Zhang, Xiaoqin Zhang
Published in: Computer Vision – ECCV 2008
Publisher: Springer Berlin Heidelberg
Most existing appearance models for visual tracking construct a pixel-based representation of object appearance and are therefore incapable of fully capturing both the global and the local spatial layout information of object appearance. To address this problem, we propose a novel spatial Log-Euclidean appearance model (referred to as SLAM) under the recently introduced Log-Euclidean Riemannian metric [23].
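Under the Log-Euclidean framework, symmetric positive-definite matrices (e.g. region covariance descriptors) are compared via the Frobenius norm of the difference of their matrix logarithms. A minimal sketch in Python (the helper name and the example covariance matrices are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(A, B):
    """Log-Euclidean distance between two SPD matrices:
    the Frobenius norm of the difference of their matrix logs."""
    return np.linalg.norm(logm(A) - logm(B), ord="fro")

# Two small SPD covariance matrices (illustrative values)
A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.5, 0.1], [0.1, 1.2]])
d = log_euclidean_distance(A, B)
```

Because the matrix logarithm maps SPD matrices into a vector space, distances and means under this metric reduce to ordinary Euclidean operations on the log-mapped matrices, which is what makes eigenspace learning on covariance descriptors tractable.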
SLAM captures both the global and the local spatial layout information of object appearance by constructing a block-based Log-Euclidean eigenspace representation. Specifically, learning the proposed SLAM consists of five steps: appearance block division, online Log-Euclidean eigenspace learning, local spatial weighting, global spatial weighting, and likelihood evaluation. Furthermore, a novel online Log-Euclidean Riemannian subspace learning algorithm (IRSL) [14] is applied to incrementally update the proposed SLAM. Tracking is then guided by a Bayesian state inference framework in which a particle filter propagates the sample distributions over time. Theoretical analysis and experimental evaluations demonstrate the promise and effectiveness of the proposed SLAM.
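The particle-filter step of the Bayesian inference framework can be illustrated with a toy propagate-weight-resample loop. This is a hedged sketch: the 1-D random-walk dynamics and the Gaussian stand-in likelihood are illustrative assumptions, not the paper's SLAM appearance likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
particles = rng.normal(0.0, 1.0, n)  # 1-D state samples

for _ in range(5):  # a few tracking frames
    # 1. Propagate samples with random-walk dynamics.
    particles = particles + rng.normal(0.0, 0.1, n)
    # 2. Weight each sample by an appearance likelihood
    #    (Gaussian stand-in for the SLAM likelihood evaluation).
    weights = np.exp(-0.5 * particles ** 2)
    weights /= weights.sum()
    # 3. Multinomial resampling concentrates samples on likely states.
    particles = particles[rng.choice(n, size=n, p=weights)]

estimate = particles.mean()  # point estimate of the target state
```

Each frame, the filter trades off the motion model (step 1) against the appearance model (step 2); in the paper's setting, step 2 is where the SLAM likelihood evaluation plugs in.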